Electrode pooling can boost the yield of extracellular recordings with switchable silicon probes

Understanding brain function requires monitoring the complex pattern of activity distributed across many neuronal circuits. To this end, the BRAIN Initiative has called for the development of technologies for recording "dynamic neuronal activity from complete neural networks, over long periods, in all areas of the brain", ideally "monitoring all neurons in a circuit". Recent advances in the design and manufacturing of silicon-based neural probes have answered this challenge with new devices that have thousands of recording sites. Still, the best methods sample neural circuits very sparsely, for example recording fewer than 10⁴ cells in a mouse brain that has 10⁸.

In many of these electrode array devices only a small fraction of the recording sites can be used at once. The reason is that neural signals must be brought out of the brain via wires, which take up much more volume than the recording sites themselves. For example, in one state-of-the-art silicon shank, each wire displaces thirty times more volume than a recording site once the shank is fully inserted in the brain. The current silicon arrays invariably displace more neurons than they record, and thus the goal of "monitoring all neurons" seems unattainable by simply scaling the present approach (but see ref. ). Clearly we need a way to increase the number of neurons recorded while avoiding a concomitant increase in the number of wires that enter the brain.

A common approach by which a single wire can convey multiple analog signals is time-division multiplexing. A rapid switch cycles through the N input signals and connects each input to the output line for a brief interval (Fig. a). At the other end of the line, a synchronized switch demultiplexes the N signals again. In this way, a single wire carries signals from all its associated electrodes interleaved in time. The cycling rate of the switch is constrained by the sampling theorem: it should be at least twice the highest frequency component present in the signal. The raw voltage signals from extracellular electrodes include thermal noise that extends far into the megahertz regime. Therefore an essential element of any such multiplexing scheme is an analog low-pass filter associated with each electrode. This anti-alias filter removes the high-frequency noise above a certain cut-off frequency. In practice, the cut-off is chosen to match the bandwidth of neuronal action potentials, typically 10 kHz. Then the multiplexer switch can safely cycle at a few times that cut-off frequency.

Given the ubiquity of time-division multiplexing in communication electronics, what prevents its use for neural recording devices? One obstacle is the physical size of the anti-alias filter associated with each electrode. When implemented in CMOS technology, such a low-pass filter occupies an area much larger than the recording site itself, which would force the electrodes apart and prevent any high-density recording. What if one simply omitted the low-pass filter? In that case aliasing of high-frequency thermal fluctuations will increase the noise power in the recording by a factor equal to the number of electrodes N being multiplexed. One such device with a multiplexing factor of N = 128 has indeed proven unsuitable for recording action potentials, as the noise drowns out any signal.
A recent design with a more modest N = 8 still produces noise power 4–15 times higher than in comparable systems without multiplexing. Other issues further limit the use of time-division multiplexing: the requirement for amplification, filtering, and rapid switching right next to the recording site means that electric power gets dissipated on location, heating up exactly the neurons one wants to monitor. Furthermore, the active electronics in the local amplifier are sensitive to light, which can produce artifacts during bright light flashes for optogenetic stimulation.

An alternative approach involves static electrode selection (Fig. b). Again, there is an electronic switch that connects the wire to one of many electrodes. However, the switch setting remains unchanged during the electrical recording. In this way, the low-pass filtering and amplification can occur at the other end of the wire, outside the brain, where space is less constrained. The switch itself requires only minimal circuitry that fits comfortably under each recording site, even at a pitch of 20 μm or less. Because there is no local amplification or dynamic switching, the issues of heat dissipation and photosensitivity do not arise. This method has been incorporated recently into flat electrode arrays as well as silicon prongs. It allows the user to choose one of many electrodes intelligently, for example, because it carries a strong signal from a neuron of interest. This strategy can increase the yield of neural recordings, but it does not increase the number of neurons per wire.

On this background, we introduce a third method of mapping electrodes to wires: select multiple electrodes with suitable signals and connect them to the same wire (Fig. c). Instead of rapidly cycling the intervening switches, as in multiplexing, simply leave all those switches closed. This creates a "pool" of electrodes whose signals are averaged and transmitted on the same wire. At first, that approach seems counterproductive, as it mixes together recordings that one would like to analyze separately. How can one ever reconstruct which neural signal came from which electrode? Existing multi-electrode systems avoid this signal mixing at all cost, often quoting the low cross-talk between channels as a figure of merit. Instead, we will show that the pooled signal can be unmixed if one controls the switch settings carefully during the recording session. Under suitable conditions, this method can record many neurons per wire without appreciable loss of information.

We emphasize that the ideal electrode array device to implement this recording method does not yet exist. It would be entirely within reach of current fabrication capabilities, but every new silicon probe design requires a substantial investment and consideration of various trade-offs. With this article, we hope to make the community of electrode users aware of the opportunities in this domain and start a discussion about future array designs that use intelligent electrode switching, adapted to various applications in basic and translational neuroscience.
Motivation for electrode pooling: spike trains are sparse in time

A typical neuron may fire ~10 spikes/s on average. Each action potential lasts for ~1 ms. Therefore this neuron's signal occupies <1% of the time axis in an extracellular recording (e.g., Fig. b). Sometimes additional neurons lie close enough to the same electrode to produce large spikes. That still leaves most of the time axis unused for signal transmission. Electrode pooling gives the experimenter the freedom to add more neurons to that signal by choosing other electrodes that carry large spikes. Eventually a limit will be reached when the spikes of different neurons collide and overlap in time so they can no longer be distinguished. These overlaps may be more common under conditions where neurons are synchronized to each other or to external events.

The effects of pooling on spikes and noise

What signal actually results when one connects two electrodes to the same wire? Figure a shows an idealized circuit for a hypothetical electrode array that allows electrode pooling. Here the common wire is connected via programmable switches to two recording sites. At each site i, the extracellular signal of nearby neurons reaches the shared wire through a total electrode impedance Z_i. This impedance has contributions from the metal/saline interface and the external electrolyte bath, typically amounting to 100 kΩ–1 MΩ. By comparison, the CMOS switches have low impedance, typically ~100 Ω, which we will ignore. In general, one must also consider the shunt impedance Z_S in parallel to the amplifier input. This can result from current leaks along the long wires as well as the internal input impedance of the amplifier. For well-designed systems, this shunt impedance should be much larger than the electrode impedances; for the Neuropixels device, we will show that the ratio is at least 100. Thus one can safely ignore it for the purpose of the present approximations. In that case, the circuit acts as a simple voltage divider between the impedances Z_i. If a total of M electrodes are connected to the shared wire, the output voltage U is the average of the signals at the recording sites V_i, weighted inversely by the electrode impedances,

$$U=\sum_{i=1}^{M}c_{i}V_{i} \qquad (1)$$

where

$$c_{i}=\frac{1/Z_{i}}{\sum_{j=1}^{M}1/Z_{j}} \qquad (2)$$

is defined as the pooling coefficient for electrode i. If all electrodes have the same size and surface coating, they will have the same impedance, and in that limit one expects the simple relationship

$$U=\frac{1}{M}\sum_{i=1}^{M}V_{i} \qquad (3)$$

Thus an action potential that appears on only one of the M electrodes will be attenuated in the pooled signal by a factor 1/M.
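To make the voltage-divider arithmetic concrete, here is a minimal sketch of Eqs. (1)–(3) in Python. All function names and the example impedance values are our own, chosen within the 100 kΩ–1 MΩ range quoted above:

```python
import numpy as np

def pooling_coefficients(impedances):
    """Eq. (2): c_i = (1/Z_i) / sum_j (1/Z_j)."""
    admittances = 1.0 / np.asarray(impedances, dtype=float)
    return admittances / admittances.sum()

def pooled_signal(site_signals, impedances):
    """Eq. (1): U(t) = sum_i c_i * V_i(t) for an (M x T) array of site voltages."""
    c = pooling_coefficients(impedances)
    return c @ np.asarray(site_signals, dtype=float)

# Example: two sites with equal 150 kOhm impedance -> c = [0.5, 0.5],
# so a spike seen on only one site is attenuated by 1/M = 1/2 (Eq. (3)).
V = np.array([[0.0, 100.0, 0.0],   # site 0 carries a 100 uV spike sample
              [0.0,   0.0, 0.0]])  # site 1 is silent
print(pooling_coefficients([150e3, 150e3]))  # -> [0.5 0.5]
print(pooled_signal(V, [150e3, 150e3]))      # -> [ 0. 50.  0.]
```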
In order to understand the trade-offs of this method, we must similarly account for the pooling of noise (Fig. a). There are three relevant sources of noise: (1) thermal ("Johnson") noise from the impedance of the electrode; (2) biological noise ("hash") from many distant neurons whose signals are too small to be resolved; (3) electronic noise resulting from the downstream acquisition system, including amplifier, multiplexer, and analog-to-digital converter. The thermal noise is private to each electrode, in the sense that it is statistically independent of the noise at another electrode. The biological noise is similar across neighboring electrodes that observe the same distant populations. For widely separated electrodes the hash will be independent and thus private to each electrode, although details depend on the neuronal geometries and the degree of synchronization of distant neurons. In that case the private noise is

$$N_{\mathrm{pri},i}=\sqrt{N_{\mathrm{the},i}^{2}+N_{\mathrm{bio},i}^{2}} \qquad (4)$$

because thermal noise and biological noise are additive and statistically independent. Finally the noise introduced by the amplifier and data acquisition is common to all the electrodes that share the same wire,

$$N_{\mathrm{com}}=N_{\mathrm{amp}} \qquad (5)$$

In the course of pooling, the private noise gets attenuated by the pooling coefficient c_i (Eq. (2)) and summed with contributions from other electrodes. Then the pooled private noise gets added to the common noise from data acquisition, which again is statistically independent of the other noise sources. With these assumptions the total noise at the output has RMS amplitude

$$N_{\mathrm{tot}}=\sqrt{N_{\mathrm{com}}^{2}+\sum_{i=1}^{M}c_{i}^{2}N_{\mathrm{pri},i}^{2}} \qquad (6)$$

If all electrodes have similar noise properties and impedances this simplifies to

$$N_{\mathrm{tot}}=\sqrt{N_{\mathrm{com}}^{2}+N_{\mathrm{pri}}^{2}/M} \qquad (7)$$
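Continuing the sketch above (same assumptions and invented names; the biological-noise value in the example is made up for illustration), the noise bookkeeping of Eqs. (4)–(7) takes only a few lines:

```python
import numpy as np

def private_noise(n_thermal, n_bio):
    """Eq. (4): private RMS noise per electrode; independent sources add in quadrature."""
    return np.sqrt(np.asarray(n_thermal)**2 + np.asarray(n_bio)**2)

def total_pooled_noise(n_common, n_private, coeffs):
    """Eq. (6): common noise plus coefficient-weighted private noise, in quadrature."""
    c = np.asarray(coeffs, dtype=float)
    return np.sqrt(n_common**2 + np.sum(c**2 * np.asarray(n_private)**2))

# Equal-impedance limit (Eq. (7)): with M sites, c_i = 1/M and
# N_tot = sqrt(N_com^2 + N_pri^2 / M). Example with illustrative values (uV RMS):
M = 4
n_pri = private_noise(1.45, 10.0)            # thermal + assumed biological "hash"
print(total_pooled_noise(5.7, [n_pri] * M, [1 / M] * M))
print(np.sqrt(5.7**2 + n_pri**2 / M))        # same number, via Eq. (7)
```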
Theoretical benefits of pooling

Now we are in a position to estimate the benefits from electrode pooling. Suppose that the electrode array records neurons with a range of spike amplitudes: from the largest, with spike amplitude S_max, to the smallest that can still be sorted reliably from the noise, with amplitude S_min. To create the most favorable conditions for pooling one would select electrodes that each carry a single neuron, with spike amplitude ~S_max (Fig. c). As one adds more of these electrodes to the pool, there comes a point when the pooled signal is so attenuated that the spikes are no longer sortable from the noise. Pooling is beneficial as long as the signal-to-noise ratio of spikes in the pooled signal is larger than that of the smallest sortable spikes in a single-site recording, namely

$$\frac{S_{\max}/M}{\sqrt{N_{\mathrm{com}}^{2}+N_{\mathrm{pri}}^{2}/M}} > \frac{S_{\min}}{\sqrt{N_{\mathrm{com}}^{2}+N_{\mathrm{pri}}^{2}}} \qquad (8)$$

This leads to a limit on the pool size M,

$$M < M_{\max}=\sqrt{\left(\frac{\beta^{2}}{2}\right)^{2}+(1+\beta^{2})\,\alpha^{2}}-\frac{\beta^{2}}{2} \qquad (9)$$

where

$$\alpha=S_{\max}/S_{\min},\qquad \beta=N_{\mathrm{pri}}/N_{\mathrm{com}} \qquad (10)$$

If one pools more than M_max electrodes all the neurons drop below the threshold for sorting. So the optimal pool size M_max is also the largest achievable number of neurons per wire. This number depends on two parameters: the ratio of private to common noise, and the ratio of largest to smallest useful spike amplitudes (Eq. (10)). These parameters vary across applications, because they depend on the target brain area, the recording hardware, and the spike-sorting software. In general, users can estimate the parameters α and β from their own experience with conventional recordings, and find M_max from the graph in Fig. c.
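Eq. (9) is easy to evaluate directly. The following sketch (our own code) reproduces the example quoted later in the Results, where α = 5.1 and β = 1.6 give M_max ≈ 8:

```python
import math

def max_pool_size(alpha, beta):
    """Eq. (9): largest pool size M for which all selected units stay sortable.

    alpha = S_max / S_min  (largest over smallest sortable spike amplitude)
    beta  = N_pri / N_com  (private over common RMS noise)
    """
    b2 = beta**2
    return math.sqrt((b2 / 2)**2 + (1 + b2) * alpha**2) - b2 / 2

print(max_pool_size(5.1, 1.6))              # ~8.4
print(math.floor(max_pool_size(5.1, 1.6)))  # -> 8 electrodes per wire
```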
Next we consider a more generic situation, in which each electrode carries a range of spikes from different neurons (Fig. d). For simplicity, we assume a uniform distribution of spike amplitudes between 0 and S_max. As more electrodes are added to the pool, all the spikes are attenuated, so the smallest action potentials drop below the sortable threshold S_min. Beyond a certain optimal pool size, more spikes are lost in the noise than are added at the top of the distribution, and the total number of neurons decreases. By the same arguments used above one finds that the improvement in the number of sortable neurons, n_M, relative to conventional split recording, n_1, is

$$\frac{n_{M}}{n_{1}}=\frac{M\left(\alpha-\sqrt{M\,\frac{1+\beta^{2}/M}{1+\beta^{2}}}\right)}{\alpha-1} \qquad (11)$$

The optimal pool size M_max is the M which maximizes that factor. The results are plotted in Fig. d. The benefits of pooling are quite substantial if the user can select electrodes that carry large spikes. For example, under conditions of α and β that we have encountered in practice, Fig. c predicts that one can pool 8 electrodes and still resolve all the signals, thus increasing the neuron/wire ratio by a factor of 8. At the other extreme, with a uniform distribution of spike amplitudes, the optimal pool of 4 electrodes increases the neuron/wire ratio by a more modest but still respectable factor of 2.3 compared to conventional recording. The following section explains how one can maximize that yield.
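Under the uniform-amplitude assumption, the optimal pool can be found by evaluating Eq. (11) over integer M. A sketch with our own code; the values of α and β must be supplied for the scenario at hand, e.g., estimated from one's own split-mode recordings:

```python
import math

def yield_ratio(M, alpha, beta):
    """Eq. (11): sortable neurons with pool size M, relative to split recording."""
    return M * (alpha - math.sqrt(M * (1 + beta**2 / M) / (1 + beta**2))) / (alpha - 1)

def optimal_pool(alpha, beta, max_M=32):
    """Integer M in 1..max_M that maximizes the yield ratio of Eq. (11)."""
    return max(range(1, max_M + 1), key=lambda M: yield_ratio(M, alpha, beta))

# Sanity check: M = 1 recovers the split-mode baseline, yield_ratio(1, a, b) == 1.
```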
Acquisition and analysis of pooled recordings

With these insights about the constraints posed by signal and noise one can propose an overall workflow for experiments using electrode pooling (Fig. a). A key requirement is that the experimenter can control the switches that map electrodes to wires. This map should be adjusted to the unpredictable contingencies of any particular neural recording experiment. In fact, the experimenter will benefit from using different switch settings during the same session.

A recording session begins with a short period of acquisition in "split mode" with only one electrode per wire. The purpose is to acquire samples of the spike waveforms from all neurons that might be recorded by the entire array. If the device has E electrodes and W wires, this sampling stage will require E/W segments of recording to cover all electrodes. For each segment, the switches are reset to select a different batch of electrodes. Each batch should cover a local group of electrodes, ensuring that the entire "footprint" of each neuron can be captured. During this sampling stage, the experimenter performs a quick analysis to extract the relevant data that will inform the pooling process. In particular, this yields a catalog of single neurons that can be extracted by spike-sorting. For each of those neurons, one has the spike waveform observed on each electrode. Finally, for every electrode one measures the total noise. The amplifier noise N_amp and thermal noise N_the can be assessed ahead of time, because they are a property of the recording system, and from them, one obtains the biological noise N_bio.

Now the experimenter has all the information needed to form useful electrode pools. Some principles one should consider in this process are:
- Pool electrodes that carry large signals.
- Electrodes with smaller signals can contribute to smaller pools.
- Pool electrodes with distinct spike waveforms.
- Pool distant electrodes that don't share the same hash noise.
- Don't pool electrodes that carry dense signals with high firing rates.

After allocating the available wires to effective electrode pools one begins the main recording session in pooled mode. Ideally this phase captures all neurons with spike signals that are within reach of the electrode array. In analyzing these recordings the goal is to detect spikes in the pooled signals and assign each spike correctly to its electrode of origin. This can be achieved by using the split-mode recordings from the early sampling stage of the experiment. From the spike waveforms obtained in split mode, one can predict how the corresponding spike appears in the pooled signals. Here it helps to know all the electrode impedances Z_i so the weighted mix can be computed accurately (Eqs. (1) and (2)). This prediction serves as a search template for spike-sorting the pooled recording; a minimal version of this prediction step is sketched after this section.

By its very nature electrode pooling produces a dense neural signal with more instances of temporal overlap between spikes than the typical split-mode recording. This places special demands on the methods for spike detection and sorting. The conventional cluster-based algorithm (peak detection–temporal alignment–PCA–clustering) does not handle overlapping spikes well. It assumes that the voltage signal is sparsely populated with rare events drawn from a small number of discrete waveforms. Two spikes that overlap in time produce an unusual waveform that cannot be categorized. Recently some methods have been developed that do not force these assumptions. They explicitly model the recorded signal as an additive superposition of spikes and noise. The algorithm finds an efficient model that explains the signal by estimating both the spike waveform of each neuron and its associated set of spike times. These methods are well suited to the analysis of pooled recordings. Because the spike templates are obtained from split-mode recordings at the beginning of the experiment, they are less affected by noise than if one had to identify them de novo from the pooled recordings. Nonetheless, it probably pays to monitor the development of spike shapes during the pooled recording. If they drift too much, for example, because the electrode array moves in the brain, then a recalibration by another split-mode session may be in order (Fig. a). Alternatively, electrode drift may be corrected in real time if signals from neighboring electrodes are available, a criterion that may flow into the selection of switches for pooling. Chronically implanted electrode arrays can record for months on end, and the library of spike shapes can be updated continuously and scanned for new pooling opportunities.

It should be clear that the proposed workflow relies heavily on automation by dedicated software. Of course, automation is already the rule with the large electrode arrays that include thousands of recording sites, and electrode pooling will require little more effort than conventional recording. Taking the newly announced Neuropixels 2.0 as a reference (5120 electrodes and 384 wires): sampling for 5 min from each of the 13 groups of electrodes will take a bit over an hour. Spike-sorting of those signals will proceed in parallel with the sampling and require no additional time. Then the algorithm decides on the electrode pools, and the main recording session starts. Note that these same steps also apply in conventional recording: the user still has to choose 384 electrodes among the 5120 options, and will want to scan the whole array to see where the best signals are. The algorithms we advocate to steer electrode pooling will simply become part of the software suite that runs data acquisition.
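As a concrete illustration of the prediction step referenced above, this sketch (our own code, with invented array shapes) scales and sums split-mode templates into the expected pooled-mode template using the pooling coefficients of Eq. (2):

```python
import numpy as np

def predict_pooled_template(split_templates, impedances):
    """Predict the pooled-mode waveform of one unit from its split-mode templates.

    split_templates: list of (channels x samples) arrays, one per pooled electrode
                     group, all registered to the same wire/channel layout.
    impedances:      electrode impedance Z_i of each group, entering via Eq. (2).
    """
    z = np.asarray(impedances, dtype=float)
    c = (1.0 / z) / np.sum(1.0 / z)           # pooling coefficients, Eq. (2)
    return sum(ci * t for ci, t in zip(c, split_templates))

# A unit recorded in split mode on one electrode group appears attenuated by its
# c_i in the pooled recording (groups where the unit is absent contribute zeros);
# the predicted template can then seed a template-matching spike sorter.
```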
Pooling characteristics of the Neuropixels 1.0 array

To test the biophysical assumptions underlying electrode pooling, we used the Neuropixels probe version 1.0. This electrode array has a single silicon shank with 960 recording sites that can be connected to 384 wires via controllable switches (Fig. b). The electrodes are divided into three banks (called Bank 0, Bank 1, and Bank 2 from the tip to the base of the shank). In the present study, only Banks 0 and 1 were used. Banks 0 and 1 each contain 383 recording sites (one channel is used for a reference signal). Each site has a dedicated switch by which it may connect to an adjacent wire. Sites at the same relative location in a bank share the same wire. These two electrodes are separated by 3.84 mm along the shank. Under the conventional operation of Neuropixels, each wire connects to only one site at a time. However, with modifications of the firmware on the device and the user interface we engineered independent control of all the switches. This enabled a limited version of electrode pooling across Banks 0 and 1.

We set out to measure those electronic properties of the device that affect the efficacy of pooling, specifically the split of the noise signal into common amplifier noise N_amp (Eq. (5)) and private thermal noise N_the (Eq. (4)), as well as the pooling coefficients c_i (Eq. (2)). These parameters are not important for conventional recording, and thus are not quoted in the Neuropixels user manual, but they can be derived from measurements performed in a saline bath (see ). On a pristine unused probe, the pooling coefficients c_0 and c_1 for almost all sites were close to 0.5 (Fig. a), as expected from the idealized circuit (Fig. a) if the electrode impedances are all equal (Eq. (2)). Correspondingly the thermal noise was almost identical on all electrodes, with an RMS value of 1.45 ± 0.10 μV (Fig. b). The amplifier noise N_amp exceeded the thermal noise substantially, amounting to 5.7 μV RMS on average, and more than 12 μV for a few of the wires (Fig. c). Because this noise source is shared across electrodes on the same wire, it lowers β in Eq. (10) and can significantly affect electrode pooling.
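The measured thermal noise is consistent with the Johnson-noise formula N_the = sqrt(4 k_B T R Δf). A quick check with our own code; the 10 kHz bandwidth is an assumption matching the action-potential band discussed in the Introduction, and 13 kΩ is the bath resistance of an unused probe reported below:

```python
import math

def johnson_noise_rms(resistance_ohm, bandwidth_hz, temperature_k=295.0):
    """Thermal (Johnson) noise RMS in volts: sqrt(4 k_B T R B)."""
    k_B = 1.380649e-23  # Boltzmann constant, J/K
    return math.sqrt(4 * k_B * temperature_k * resistance_ohm * bandwidth_hz)

# 13 kOhm bath resistance over a ~10 kHz band at room temperature:
print(johnson_noise_rms(13e3, 10e3) * 1e6)  # ~1.5 uV RMS, close to the 1.45 uV measured
```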
Neural recording

Based on this electronic characterization of the Neuropixels probe we proceeded to test electrode pooling in vivo. Recall that each bank of electrodes extends over 3.84 mm of the shank, and one needs to implant more than one bank into the brain to accomplish any electrode pooling. Clearly, the opportunities for pooling on this device are limited; nonetheless, it serves as a useful testing ground for the method. In the pilot experiment analyzed here, the probe was inserted into the brain of a head-fixed, awake mouse to a depth of ~6 mm. This involved all of Bank 0 and roughly half of Bank 1, and covered numerous brain areas from the medial preoptic area at the bottom to the retrosplenial cortex at the top. Following the workflow proposed in Fig. , we then recorded for ~10 min each from Bank 0 and Bank 1 in split mode, followed by ~10 min of recording from both banks simultaneously in pooled mode.

Unmixing a pooled recording

As proposed above, one can unmix the pooled recording by matching its action potentials to the spike waveforms sampled separately on each of the two banks (Fig. b). Each of the three recordings (split Bank 0, split Bank 1, and pooled Banks 0 + 1) was spike-sorted to isolate single units. Then we paired each split-mode unit with the pooled-mode unit that had the most similar waveform, based on the cosine similarity of their waveform vectors (Eq. , Fig. b). In most cases, the match was unambiguous even when multiple units were present in the two banks with similar electrode footprints (Fig. a). The matching algorithm proceeded iteratively until the similarity score for the best match dropped below 0.9 (Fig. b). We corroborated the resulting matches by comparing other statistics of the identified units, such as the mean firing rate and inter-spike-interval distribution (Fig. a).

When spike-sorting the pooled-mode recording there is of course a strong expectation for what the spike waveforms will be, namely a scaled version of spikes from the two split-mode recordings. This suggests that one might jump-start the sorting of the pooled signal by building in the prior knowledge from sorting the split-mode recordings. Any such regularization could be beneficial, not only to accelerate the process but to compensate for the lower SNR in the pooled signal. We explored this possibility by running the template-matching function of KiloSort2 on the pooled-mode recording with templates from split-mode recordings ("hot sorting"). Then we compared this method to two other procedures (Fig. c): (1) sorting each recording separately, using KiloSort1 with manual curation ("manual"), and (2) sorting each recording separately using KiloSort2 with no manual intervention ("cold sorting"). Figure c illustrates what fraction of the units identified in both split-mode recordings combined were recovered from the pooled recording, and how that fraction depends on the spike amplitude. First, this shows that hot sorting significantly outperforms cold sorting, and in fact rivals the performance of manually curated spike sorting. This is important, because manual sorting by a human operator will be unrealistic for the high-count electrode arrays in which electrode pooling may be applied. Second, one sees that the fraction of spikes recovered from the combined split recordings exceeds 0.5 even at moderate spike amplitudes of 100 μV. For spikes of that amplitude and above the pooled recording will contain more neurons than the average split recording. Clearly, electrode pooling is not restricted to the largest spikes in the distribution, but can be considered for moderate spike amplitudes as well.

Recall that the Neuropixels 1.0 probe is not optimized for electrode pooling, in that it has a fixed switching matrix, and only 2 banks of electrodes fit in the mouse brain. Thus our pilot experiments were limited to brute-force pooling the two banks site-for-site without regard to the design principles for electrode pools. Nonetheless, the "hot sorting" method recovered more neurons from the pooled recording (184) than on average over the two split recordings (166). We conservatively focused this assessment only on units identified in the split recordings, ignoring any unmatched units that appeared in the pooled recording. This validates the basic premise of electrode pooling even under these highly constrained conditions. Overall, the above sequence of operations demonstrates that a pooled-mode recording can be productively unmixed into the constituent signals, and the resulting units assigned to their locations along the multi-electrode shank.
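A minimal version of this matching step might look as follows (our own sketch; the 0.9 cutoff is the threshold quoted above, and waveforms are assumed to be flattened to equal-length vectors):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two flattened waveform vectors."""
    a, b = np.ravel(a), np.ravel(b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_units(split_templates, pooled_templates, threshold=0.9):
    """Greedily pair split-mode and pooled-mode units by waveform similarity.

    Accepts the best remaining pair repeatedly and stops once the best
    available similarity drops below `threshold`. Returns (i, j) index pairs.
    """
    sims = np.array([[cosine_similarity(s, p) for p in pooled_templates]
                     for s in split_templates])
    pairs = []
    while sims.size and sims.max() >= threshold:
        i, j = np.unravel_index(np.argmax(sims), sims.shape)
        pairs.append((int(i), int(j)))
        sims[i, :] = -np.inf   # each unit participates in at most one match
        sims[:, j] = -np.inf
    return pairs
```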
Pooling of signal and noise in vivo

Closer analysis of the spike waveforms from split and pooled recordings allowed an assessment of the pooling coefficients in vivo. When spikes are present on the corresponding electrodes in both banks (as in Fig. a) one can measure the pooling coefficients c_0 and c_1 of Eq. (2). Unexpectedly, instead of clustering near 0.5, these pooling coefficients varied over a wide range (Fig. e), at least by a factor of 3. The two banks had systematically different pooling coefficients, suggesting that the impedance was lower for electrodes near the tip of the array. Following this in vivo recording we cleaned the electrode array by the recommended protocol (Tergazyme/water) and then measured the pooling coefficients in saline. Again the pooling coefficients varied considerably across electrodes, although somewhat less than observed in vivo (Fig. e). Also the bath resistance of the electrodes was larger on average than on an unused probe (30 kΩ as opposed to 13 kΩ). This change may result from interactions with brain tissue. For example, some material may bind to the electrode surface and thus raise its bath resistance. This would lower the pooling coefficient of the affected electrode and raise that of its partners. Because the thermal noise is never limiting (Fig. b–d), such a change would easily go unnoticed in conventional single-site recording. The precise reason for the use-dependent impedance remains to be understood.

To measure the contributions of biological noise in vivo we removed from the recorded traces all the detected spikes and analyzed the remaining waveforms. After subtracting (in quadrature) the known thermal and electrical noise at each site (Fig. b, c) one obtains the biological noise N_bio. This noise source substantially exceeded both the thermal and amplifier noise (Fig. f). It also showed different amplitudes on the two banks, presumably owing to differences between brain areas 3.84 mm apart. Given this large distance between electrodes in the two banks, one expects the biological noise to be statistically independent between the two sites, because neurons near one electrode will be out of reach of the other. To verify this in the present recordings we measured the biological noise in the pooled condition and compared the result to the prediction from the two split recordings, assuming that the noise was private to each site. Indeed the noise in the pooled signal was largely consistent with the assumption of independent noise (Fig. f). It seems likely that the 1-cm shank length on these and similar array devices suffices for finding electrodes that carry independent biological noise.
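The quadrature bookkeeping used here is short enough to spell out (our own sketch; the function names are invented, and the split/pooled comparison follows Eq. (6) with the private noise assumed independent per site):

```python
import numpy as np

def biological_noise(n_total, n_thermal, n_amp):
    """Quadrature subtraction: N_bio = sqrt(N_tot^2 - N_the^2 - N_amp^2)."""
    return np.sqrt(n_total**2 - n_thermal**2 - n_amp**2)

def predicted_pooled_noise(n_amp, n_pri_bank0, n_pri_bank1, c0=0.5, c1=0.5):
    """Eq. (6) for two banks, assuming the private noise is independent per site."""
    return np.sqrt(n_amp**2 + c0**2 * n_pri_bank0**2 + c1**2 * n_pri_bank1**2)

# If the measured pooled-mode noise matches this prediction, the biological
# "hash" at the two banks is consistent with being statistically independent.
```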
How many electrodes could experimenters pool and still sort every neuron with high accuracy? Earlier we had derived a theoretical limit to electrode pooling based solely on the signal and noise amplitudes (Fig. ). To explore what additional limitations might arise from the density of spikes in time and the needs of spike sorting, we performed a limited simulation of the process (Fig. a). We simulated units with an extracellular footprint extending over 4 neighboring electrodes, and then pooled various such tetrodes into a single 4-channel recording. These pooled signals were then spike-sorted and the resulting spike trains compared to the known ground-truth spike times, applying a popular metric of accuracy . This revealed how many neurons can be reliably recovered depending on the degree of electrode pooling (Fig. b). We then evaluated the effects of various parameters of the simulation, such as the amplitude of the largest spikes, the biological noise, and the average firing rate. For simplicity we focused on the favorable scenario of Fig. c: it presumes that the experimenter can choose for pooling a set of tetrodes that each carry a single unit plus noise. The curves of recovered units vs pool size have an inverted-U shape (Fig. b). For small electrode pools, one can reliably recover all the units. Eventually, however, some of the units drop out, and for a large pool size all the recovered units fall below the desired quality threshold. We call the pool that produces the largest number of recovered units the "optimal pool". For the "standard" condition of the simulations, we chose a reasonably large spike amplitude of 380 μV peak-to-peak (the 90th percentile in a database of recordings by the Allen Institute ), a firing rate of 10 Hz, and all noise values as determined experimentally for the Neuropixels 1.0 device (Fig. ). Under these conditions, one can pool up to 5 electrodes per wire and still recover all 5 units reliably (Fig. b). This optimal pool size is sensitive to the amplitude of the spikes: if the spike amplitude is reduced by a factor of 2, the optimal pool drops from 5 to 3 electrodes. Similarly, if the biological noise increases to 15 μV, the optimal pool shrinks to 4 electrodes. This indicates that the recovery of units from the pooled signal is strongly determined by the available signal-to-noise ratio at each electrode. By contrast, doubling the firing rate to 20 Hz did not change the optimal pool of 5, so the temporal overlap of spikes is not yet a serious constraint. Looking to the future, if the amplifier noise on each wire could be reduced by a factor of 2, the optimal pool would expand significantly, from 5 to 7 electrodes or more (Fig. b). How do these practical results relate to the theoretical bounds of Fig. ? Recall that this bound depends on the noise properties, but also on the ratio of largest to smallest sortable spikes. In our "standard" simulation with a pool size of 1 (split mode) we found that the smallest sortable spikes had an amplitude of 75 μV. This also corresponds to the low end of sorted spikes reported by the Allen Institute (10th percentile ). With these bounds on large and small spikes, and the measured values of private and common noise, one obtains α = 5.1 and β = 1.6 in Eq. , which predicts an optimal pool of M_max = 8 (Fig. c), compared to the actually observed value of 5. The simple theory based purely on signal and noise amplitudes gives a useful estimate, but additional practical constraints arising from temporal processing and spike sorting lower the yield somewhat. In summary, under favorable conditions where the experimenter can select electrodes, the pooling method may increase the number of units recorded per wire by a factor of 5. Even for significantly smaller spikes or higher biological noise one can expect a factor of 3. And with future technical improvements a factor of 7 or more is plausible.
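The "optimal pool" criterion described above reduces to a simple computation over per-unit accuracy scores. The sketch below assumes a mapping from pool size to the accuracy of each ground-truth unit after sorting; the numbers are placeholders for illustration, not output of the actual simulation.

```python
import numpy as np

def n_recovered(accuracies, threshold=0.8):
    """Number of units whose sorting accuracy clears the quality threshold."""
    return int(np.sum(np.asarray(accuracies) > threshold))

# pool size M -> accuracy of each of the M units (placeholder values)
accuracy_by_pool = {
    1: [0.98],
    3: [0.97, 0.95, 0.93],
    5: [0.95, 0.92, 0.90, 0.88, 0.85],
    7: [0.90, 0.86, 0.79, 0.74, 0.66, 0.58, 0.51],
}
recovered = {M: n_recovered(a) for M, a in accuracy_by_pool.items()}
optimal_pool = max(recovered, key=recovered.get)  # pool with most recovered units
```

With an inverted-U profile like the one in Fig. b, this picks out the largest pool that still recovers all of its units.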
Summary of results

This work presents the concept of electrode pooling as a way to multiply the yield of large electrode arrays. We show how the signals from many recording sites can be combined onto a small number of wires, and then recovered by a combination of experimental strategy and spike-sorting software. The reduced requirement for wires coursing through the brain will lead to slender array devices that cause less damage to the neurons they are meant to observe. We developed the theory behind electrode pooling, analyzed the trade-offs of the approach, derived a mathematical limit to pooling, and developed a recipe for experiment and analysis that implements the procedure (Figs. , ). We also verified the basic assumptions about signal mixing and unmixing using a real existing device: the Neuropixels 1.0 probe (Figs. , ). We showed that signals from different neurons can be reliably disambiguated and assigned back to the electrodes of origin. For the optimal design of electrode pools, and to analyze the resulting data, it is advantageous to gather precise information about the impedance and noise properties of the device. In simulations, we showed that with a proper selection of electrodes based on the signals they carry, the method could improve the yield of neurons per wire by a factor of 3–7 (Fig. ).

Electrode pooling is categorically different from most data compression schemes that have been proposed for neural recording systems – . In many of those applications, the goal is to reduce the bit rate for data transmission out of the brain, for example over a wireless link. By contrast, electrode pooling seeks to minimize the number of electrode wires one needs to insert into the brain to sample the neural signals, thus minimizing biological damage to the system under study. By itself, that does not reduce the bit rate, although it produces denser time series. For the optimal wireless recording system, both objectives—lower wire volume at the input and lower data volume at the output—should be combined, and their implementations are fully independent.

Future developments

Hardware

The ability to service multiple recording sites with a single wire opens the door for much larger electrode arrays that nevertheless maintain a slim form factor and don't require any onboard signal processing. On the commercially available Neuropixels 1.0 device the ratio of electrodes to wires is only 2.5, so there is little practical benefit to be gained from electrode pooling: in most circumstances, the user can probably use static selection to pick 40% of the electrodes and still monitor every possible neuron. By contrast, the recently announced Neuropixels 2.0 array has an electrode:wire ratio of 13.3. Another device, currently in engineering test, will have 4416 sites on a single 45-mm shank, with an electrode:wire ratio of 11.5. For the Neuropixels technology, the number of sites can grow with shank count and shank length, while the channel count is limited by base area and trace crowding on the shank. These new probes already offer substantial opportunities to pool electrodes. Indeed, Steinmetz et al. report an example of pooling two electrode banks, although their approach to unmixing the signals differs from that advocated here. The design of effective electrode pools requires some flexibility in how recording sites are connected to wires. In the current Neuropixels technology, each electrode has only one associated wire, which constrains the choice of electrode pools.
The CMOS switch itself is small, but the local memory to store the switch state occupies some silicon space . Nonetheless, one can implement 3 switches per electrode even on a very tight pitch . When arranged in a hierarchical network, these switches could effect a rich diversity of pooling schemes adapted to the specifics of any given experiment (Fig. ). For example, one could route any one electrode among a group of four to any one of three wires with two 1:4 switches (Fig. c). This requires just 1 bit of storage per electrode, as in the current Neuropixels probe . Another hardware design feature could greatly increase the capacity for electrode pooling: an optional analog inverter at each electrode (Fig. d). This is a simple CMOS circuit that changes the sign of the waveform depending on a local switch setting. If half of the electrodes in a pool use the inverter, that helps to differentiate the spike shapes of different neurons: because extracellular signals from cell bodies generally start with a negative voltage swing, inversion effectively doubles the space of waveforms that occur in the pooled signal. In turn, this would aid the spike-sorting analysis, ultimately allowing even more electrodes to share the same wire. Of course, each of these proposals comes with some cost, such as greater power use or added space required for digital logic. The overall design of a probe must take all these trade-offs into account. The several-fold gain in recording efficiency promised by electrode pooling should act as a driver in favor of fully programmable switches, but deciding on the optimal design will benefit from close interaction between users and manufacturers.

Software

Electrode pooling will also benefit from further developments in spike-sorting algorithms. For example, a promising strategy is to acquire all the spike shapes present on the electrode array using split-mode recordings, compute the expected pooled-mode waveforms, and use those as templates in sorting the pooled signals. We have implemented this so-called "hot sorting" method in KiloSort2 and have shown that it can greatly increase the number of split-mode cells recovered in the pooled recordings (Fig. c); a brief code sketch of the template construction follows below. This idea may also be extended to cluster-based sorting algorithms, by guiding the initialization of the clustering step. Indeed, knowing ahead of time which waveforms to look for in the recording would help any spike-sorter. We expect this method will also improve the resolution of temporally overlapping spike waveforms. As one envisions experiments with 10,000 or more recording sites, it becomes imperative to automate the optimal design of electrode pools, so that the user wastes no time before launching into pooled recording (Fig. ). The pooling strategy can be adapted flexibly to the statistics of the available neural signals, even varying along the silicon shank if it passes through different brain areas. The user always has the option of recording select sites in conventional mode; for example, this might serve to sample local field potentials at a sparse set of locations. Designing an effective algorithm that recommends and implements the electrode switching based on user goals will be an interesting challenge.

High-impact applications

Finally, we believe that the flexible pooling strategy will be particularly attractive in chronic studies, where an electrode array remains implanted for months or years.
In these situations, maintaining an updated library of signal waveforms is an intrinsic part of any recording strategy. Round-the-clock recording serves to populate and refine the library, enabling the design of precise spike templates and the effective separation of pooled signals. The library keeps updating in response to any slow changes in recording geometry that may take place. A second important application for pooling arises in the context of sub-dural implants in humans. Here the sub-dural space forces a low-profile chip with minimal volume for electronic circuitry, whereas one can envision several slender penetrating electrode shafts with thousands of recording sites. We estimate that some devices that are now plausible (no published examples yet) will have an electrode-to-channel ratio near 25. Clearly one will want to record from more than 1/25 of the available sites, and electrode pooling achieves this without increased demand on electronic circuitry. In summary, while the devices that maximize pooling benefits are not yet available, they soon may be. Consideration of pooling options would benefit the designers and users of these devices. The advantage of pooling grows naturally as the same tissue is recorded across sessions and over time. The calculations and demonstrations reported here are intended to inspire professional simulations and the design of future devices for a variety of applications, including human implants.
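As promised above, here is a minimal sketch of the "hot sorting" template construction: each split-mode template is scaled by the pooling coefficient of its electrode to predict the waveform the unit should produce on the shared wire. The function name and data layout are illustrative assumptions; in our pipeline this role is played by initializing the W and U fields of KiloSort2 (see Methods).

```python
import numpy as np

def expected_pooled_templates(split_templates, pooling_coeffs):
    """Predict pooled-mode templates from split-mode ones.

    split_templates : list of (waveform, electrode_index) pairs, one per unit,
                      where waveform is an array of shape (n_samples,)
    pooling_coeffs  : array holding the pooling coefficient c_i of each
                      electrode in the pool

    Since the pooled signal is U_P = sum_i c_i * V_i, a unit recorded on
    electrode i appears on the shared wire scaled by c_i.
    """
    return [pooling_coeffs[elec] * np.asarray(wave)
            for wave, elec in split_templates]
```

The resulting waveforms can seed the template-matching step of a sorter, so the pooled recording is searched for spike shapes that are already known rather than discovered from scratch.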
All analysis was performed with Matlab R2016b (Mathworks) and Python 3. All quoted uncertainties are standard deviations.

Control of Neuropixels switching circuitry

The Neuropixels 1.0 probe has 960 recording sites that can be connected to 384 wires via controllable switches. The conventional mode of operation (split mode) is to connect one electrode to one wire at a time. Electrode pooling was implemented by modifying the Neuropixels API and the GUI software SpikeGLX to allow connecting up to three electrodes to each readout wire.

Neuropixels device measurements

To characterize signal and noise pooling on the Neuropixels 1.0 array, we immersed the probe in a saline bath containing two annular electrodes to produce an electric field gradient (Fig. a). The electrolyte was phosphate-buffered saline (Sigma-Aldrich P4417; 1× PBS contains 0.01 M phosphate buffer, 0.0027 M potassium chloride, and 0.137 M sodium chloride, pH 7.4, at 25 °C). We recorded from all 383 wires (recall that one wire is a reference electrode), first closing the switches in Bank 0, then in Bank 1, then in both banks (Fig. b). One set of measurements simply recorded the noise with no external field applied. Then we varied the concentration of PBS (by factors of 10⁻³, 10⁻², 10⁻¹, 1, and 10), which modulated the conductance of the bath electrolyte in the same proportions. For each of the 15 recording conditions (5 concentrations × 3 switch settings) we measured the root-mean-square noise on each of the 383 wires. Then we set out to explain these 5 × 3 × 383 noise values based on the input circuitry of the Neuropixels device. After some trial and error we settled on the equivalent circuit in Fig. b. It embodies the following assumptions: Each electrode is a resistor R_i in series with a capacitor C_i. The resistor is entirely the bath resistance, so it scales inversely with the saline concentration. The shunt impedance Z_S across the amplifier input is a resistor R_S in parallel with a capacitor C_S. The thermal noise from this R-C network and the voltage noise N_amp from the amplifier and acquisition system sum in quadrature. With these assumptions, one can compute the total noise spectrum under each condition. In brief, each resistor in Fig. b is modeled as a white-spectrum Johnson noise source in series with a noiseless resistor (Thevenin circuit). The various Johnson noise spectra are propagated through the impedance network to the output voltage U. That power spectrum is integrated over the AP band (300–10,000 Hz) to obtain the total thermal noise. After adding the amplifier noise N_amp in quadrature one obtains the RMS noise at the output U. This quantity is plotted in the fits of Fig. c. The result is rather insensitive to the electrode capacitance C_i because that impedance is much lower than the shunt impedance Z_S. By contrast, the bath resistance (R_0, R_1) has a large effect because one can raise it arbitrarily by lowering the saline concentration. To set the capacitor values, we therefore used the information from the Neuropixels spec sheet that the total electrode impedance at 1 kHz is 150 kΩ:

$$C_i = \frac{1}{2\pi \cdot 1000\,\mathrm{Hz} \cdot \sqrt{(150\,\mathrm{k\Omega})^{2} - R_i^{2}}} \tag{12}$$

We also found empirically that the shunt impedance is primarily capacitive: R_S is too large to be measured properly, and we set it to infinity. Thus the circuit model has only 4 scalar parameters: R_0, R_1, C_S, N_amp. Their values were optimized numerically to fit all 15 measurements. This process was repeated for each of the 383 wires. The fits are uniformly good; see Fig. c for examples. As expected, the thermal noise increases at low electrolyte concentration because the bath impedance increases (Fig. c). However, the noise eventually saturates far below the level expected for the lowest saline concentration. This reveals the presence of another impedance in the circuit that acts as a shunt across the amplifier input (Fig. a). We found that Z_S ≈ 20 MΩ. Because the shunt impedance far exceeds the electrode impedances (~150 kΩ), it has only a minor effect on signal pooling, which justifies the approximations made in Eq. . The measured noise voltage also saturates at high saline concentration (Fig. c), and remains far above the level of Johnson noise expected from the bath impedance. That minimum noise level is virtually identical for the two electrodes that connect to the same wire, whether or not they are pooled, but it varies considerably across wires (Fig. d). We conclude that this is the amplifier noise N_amp introduced by each wire's acquisition system (Fig. a). Figure e shows the best-fit values of the 4 circuit parameters, histogrammed across all the wires of an unused probe. Note that they fall in fairly narrow distributions. The bath impedance of the electrodes (in normal saline) is ~13 kΩ, the shunt capacitance is ~10 pF, and the common noise N_amp has a root-mean-square amplitude of ~6 μV integrated over the AP band (300–10,000 Hz). These measurements were performed on both fresh and used Neuropixels devices, with similar results. On a device previously used in brain recordings the bath impedance of the electrodes was somewhat higher: 30 kΩ instead of 13 kΩ. To measure the pooling coefficients we applied an oscillating electric field (1000 Hz) along the electrode array with a pair of annular electrodes (Fig. a). From the recorded waveform we estimated the signal amplitude by the Fourier coefficient at the stimulus frequency. Two different field gradients (called A and B) yielded two sets of measurements, each in the two split modes (U_{0,A}, U_{1,A}, U_{0,B}, U_{1,B}) and the pooled mode (U_{P,A}, U_{P,B}). For each of the 383 wires, we estimated the pooling coefficients of its two electrodes by solving

$$\begin{pmatrix} U_{0,A} & U_{1,A} \\ U_{0,B} & U_{1,B} \end{pmatrix} \begin{pmatrix} k_0 \\ k_1 \end{pmatrix} = \begin{pmatrix} U_{P,A} \\ U_{P,B} \end{pmatrix} \tag{13}$$

These mixing coefficients k_0 and k_1 express the recorded amplitude U_P in terms of the recorded amplitudes U_0 and U_1,

$$U_P = k_0 U_0 + k_1 U_1 \tag{14}$$

whereas the pooling coefficients c_0 and c_1 (Eq. ) are defined relative to the input voltages V_0 and V_1, namely

$$U_P = c_0 V_0 + c_1 V_1 \tag{15}$$

The U_i differ from the V_i only by the ratio of electrode impedance to shunt impedance. Given the measured value of Z_S ≈ 20 MΩ, that ratio is <1%, a negligible discrepancy. So the measured k_0 and k_1 are excellent approximations to the pooling coefficients c_0 and c_1, which in turn reflect the ratio of the two electrode impedances (Eq. ).

In vivo recording

We used a Neuropixels 1.0 probe to record neural signals from a head-fixed mouse (C57BL/6J, male, 9 months old). The probe entered the brain at 400 μm from the midline and 3.7 mm posterior from bregma at ~45°, and was advanced for ~6 mm, which corresponded to all of Bank 0 and roughly half of Bank 1. This covered many brain areas, from the retrosplenial cortex at the top to the medial preoptic nucleus at the bottom. A detailed description of the mouse surgery, probe implantation, and post hoc histology and imaging of the probe track can be found in a previous report . All procedures were in accordance with institutional guidelines and approved by the Caltech IACUC, protocol 1656. Once the probe was implanted, data were recorded in the following order: (1) split mode in Bank 0 (i.e. all 384 wires connected to recording sites in Bank 0); (2) split mode in Bank 1; (3) pooled mode across Banks 0 and 1. Each recording lasted ~10 min. Following the brain recordings, the array was cleaned according to the recommended protocol by immersion in tergazyme solution and rinsing with water.

Spike-sorting

For "manual" spike-sorting of the in vivo recordings, we used KiloSort1 (downloaded from https://github.com/cortex-lab/KiloSort on Apr 10, 2018). We ran the automatic template-matching step; the detailed settings are available in the code accompanying this manuscript. This was followed by manual curation, merging units, and identifying those of high quality. These manual judgments were based on requiring a plausible spike waveform with a footprint over neighboring electrodes, a stable spike amplitude, and a clean refractory period. This was done separately for each of the three recordings (split-mode Bank 0, split-mode Bank 1, pooled mode). We implemented the "hot sorting" feature in KiloSort2 (downloaded from https://github.com/MouseLand/Kilosort2 on Mar 19, 2020). No manual curation was used in this mode, because (1) we wanted to generate a reproducible outcome, and (2) manual inspection is out of the question for the high-volume recordings where electrode pooling will be applied. We first sorted the two split-mode recordings and used their templates to initialize the fields W and U of rez2 before running the main template-matching function on the pooled recording (see the accompanying code for more details). Finally, the splits, merges, and amplitude cutoffs in KiloSort2 ensured that the final output contained as many well-isolated units as possible. We then selected cells designated as high quality (KSLabel of Good) by KiloSort2, indicating putative, well-isolated single neurons . To elaborate on the internal operations of KiloSort2: spike-sorted units were first checked for potential merges with all other units that had similar multi-channel waveforms (waveform correlation >0.5). If the cross-correlogram had a large dip (<0.5 of the stationary value of the cross-correlogram) in the range [−1 ms, +1 ms], the units were merged.
At the end of this process, units with at least 300 spikes were checked for refractory periods in their auto-correlograms, which is a measure of contamination with spikes from other neurons. The contamination index was defined as the fraction of refractory-period violations relative to the stationary value of the auto-correlogram. The default KiloSort2 threshold of 10 percent maximum contamination was used to determine good, well-isolated units. Following spike sorting, we applied the matching algorithm based on cosine similarity (Fig. b) to determine how many cells identified in split recordings could be recovered from the pooled recording. This was compared with the results from "cold sorting", in which the pooled recording was sorted on its own, as well as with conventional sorting that includes manual curation (Fig. c).

Unmixing pooled signals

After sorting the split and pooled recordings, we computed the average waveform of every cell. Specifically, for each cell we averaged over the first n spikes, where n was the lesser of 7500 and the total number of spikes the cell fired during the recording. We then sought to identify every cell in the pooled recording with a cell in the split recordings. This was done by the following procedure: Let S denote a cell sorted from the split-mode recording (S ∈ 𝒮) and S_i its waveform at channel i. Although i can range from 1 to 384 (the total number of wires available in the Neuropixels probe), we focus only on the 20 channels above and the 20 channels below the channel with the largest amplitude (i′), i.e. J = [i′ − 20, i′ + 20]. We wish to find the cell P from the pooled-mode recording (P ∈ 𝒫) that is closest to S. To do so, we compute the cosine similarity score for each pair (S, P):

$$\Sigma(S,P) = \frac{\mathbf{S} \cdot \mathbf{P}}{\lVert \mathbf{S} \rVert \, \lVert \mathbf{P} \rVert} \tag{16}$$

where S and P here denote the column vectors obtained by concatenating every S_j and P_j (j ∈ J), respectively, and ∥⋅∥ is the ℓ₂ norm. Σ is an |𝒮|-by-|𝒫| matrix. We identify the largest element of Σ, which corresponds to the most similar pair (S, P). We then update Σ by removing the row and column of this largest element. This process is iterated until every P ∈ 𝒫 has been given a best match. By manual inspection we found that pairs with similarity scores >0.9 were good matches.

Estimating pooling coefficients in vivo

Once each P ∈ 𝒫 was assigned a match S ∈ 𝒮, the pooling coefficient k was computed by solving the optimization problem below for each channel i with a least-squares method (mldivide in Matlab):

$$\mathop{\arg\min}\limits_{k_i} \; \lVert P_i - k_i S_i \rVert \tag{17}$$

Sometimes a single recording site detected action potentials from multiple cells. As a result, its pooling coefficient could be estimated from the signal of each of these cells. Typically these estimates deviated from each other by <0.1; in such cases, we assigned the average of the values as the pooling coefficient of the recording site. When two recording sites that share a wire in pooled mode each carry a significant signal, the pooling coefficients of both sites can be estimated. Examples of such sites are shown in Fig. d–e (up to 50 pairs in Banks 0 and 1).

Simulation

Generating simulated data

We simulated extracellular voltage signals on 12 groups of 4 local electrodes ("tetrodes"). Each time series was sampled at 30,000 samples/s and extended over 600 s. After combining signal and noise as described below, the time series were filtered with a passband of 300–5000 Hz. Each tetrode carried spikes from a single unit. The spike waveform of the unit was chosen from an actual mouse brain Neuropixels recording, with a different waveform on each tetrode. Within a tetrode, one electrode chosen at random carried this spike at the nominal peak-to-peak amplitude V (Fig. b). On the other three electrodes, the spike was scaled down by random factors drawn from a uniform distribution over [0, 1]. The spike train was simulated as a Poisson process with a forced 2-ms refractory period, having an average firing rate r (Fig. b). Three sources of noise—biological noise N_bio, thermal electrode noise N_the, and common amplifier noise N_com—were generated as Gaussian processes. The quoted noise values (Fig. b) refer to root-mean-square amplitude over the 300–5000 Hz passband. Thermal noise was sampled independently for each electrode, but the biological noise was identical for electrodes within a tetrode, given that they likely observe the same background activity. Electrode pooling across M tetrodes was implemented by combining the voltage signals of the corresponding electrode on each tetrode, resulting in signals on four wires. In the process, each electrode signal was weighted by 1/M, and then amplifier noise was added to the resulting average. Amplifier noise was sampled separately for each wire. Tetrodes were added to the pool in a sequence determined by the spike shapes of their units: we started with the two most dissimilar units, as determined by the cosine similarity of their spike waveforms, and then progressively added the unit that had the lowest similarity with those already in the pool.

Sorting simulated data

The simulated 4-wire time series were sorted using KiloSort2; detailed configuration settings are available in the code accompanying this paper. We found it necessary to turn off the "median voltage subtraction" during preprocessing, because that feature somehow introduced artifacts in the 4 voltage traces. This did not occur when processing electrode-array data with many channels, for which the algorithm is intended. We note that an effective means of subtracting the common signal across wires may help suppress the biological noise and lead to better sorting results. When large numbers of tetrodes were pooled, the signal-to-noise ratio dropped to the point where KiloSort2 could not form templates in the preprocessing step. Under those conditions, we report zero units recovered (Fig. b).
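The pooling step of the simulation under "Generating simulated data" above can be summarized in a few lines. The sketch below assumes signals is an array of shape (M, 4, T) holding the four electrode traces of each of the M tetrodes; the function and argument names are illustrative, not taken from the simulation code.

```python
import numpy as np

def pool_tetrodes(signals, n_amp_rms, rng=None):
    """Pool M simulated tetrodes onto 4 wires: weight every electrode signal
    by 1/M, sum corresponding electrodes across tetrodes, then add amplifier
    noise sampled independently for each wire."""
    rng = np.random.default_rng(0) if rng is None else rng
    M = signals.shape[0]
    wires = signals.sum(axis=0) / M                     # shape (4, T)
    wires += rng.normal(0.0, n_amp_rms, size=wires.shape)
    return wires
```

Note that the 1/M weighting applies to signal and private noise alike, whereas the per-wire amplifier noise enters at full strength, which is why amplifier noise dominates at large pool sizes.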
Scoring simulated data

Following previous reports , , the spike times of the sorted units and the ground-truth units were matched and compared using the confusion-matrix algorithm from ref. . We set the acceptable time error between sorted spikes and ground-truth spikes at 0.1 ms. We then counted the number of spike pairs with matching spike times, n_match, the number of unmatched spikes in the ground-truth unit, n_miss, and the number of unmatched false-positive spikes in the sorted unit, n_fp. To assess the quality of the match between ground-truth and sorted units we adopted the Accuracy definition in ref. :

$$\mathrm{Accuracy} = \frac{n_{\mathrm{match}}}{n_{\mathrm{match}} + n_{\mathrm{miss}} + n_{\mathrm{fp}}} \tag{18}$$

Figure  shows the accuracy distribution obtained for various degrees of pooling. Sorted units with accuracy >0.8 were counted as "recovered" from the pooled signal. For each parameter set we ran the simulation three times, randomizing the noise and the spike times. Results from the three runs are reported as mean ± SD (Fig. b).

Reporting summary

Further information on research design is available in the Nature Research Reporting Summary linked to this article.
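To illustrate Eq. (18) from the scoring procedure above: the sketch below implements a simplified greedy spike-time matcher with the 0.1-ms tolerance used in the study. It stands in for, and may differ in detail from, the confusion-matrix algorithm of the cited reference.

```python
import numpy as np

def accuracy(gt_times, sorted_times, tol=1e-4):
    """Accuracy = n_match / (n_match + n_miss + n_fp); spike times in
    seconds, with a pair counted as matched if the times agree within
    tol (0.1 ms). Greedy one-to-one matching on sorted time arrays."""
    gt = np.sort(np.asarray(gt_times))
    st = np.sort(np.asarray(sorted_times))
    i = j = n_match = 0
    while i < len(gt) and j < len(st):
        dt = st[j] - gt[i]
        if abs(dt) <= tol:          # matched pair: advance both pointers
            n_match += 1
            i += 1
            j += 1
        elif dt < 0:                # sorted spike with no partner: false positive
            j += 1
        else:                       # ground-truth spike with no partner: miss
            i += 1
    total = len(gt) + len(st) - n_match   # equals n_match + n_miss + n_fp
    return n_match / total if total else 1.0
```

A unit would then count as "recovered" if this score exceeds the 0.8 threshold quoted above.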
The Neuropixels 1.0 probe has 960 recording sites that can be connected to 384 wires via controllable switches. The conventional mode of operation (split mode) was to connect one electrode to one wire at a time. Electrode pooling was implemented by modifying the Neuropixels API and the GUI software SpikeGLX to allow connecting up to three electrodes to each readout wire.
To characterize signal and noise pooling on the Neuropixels 1.0 array, we immersed the probe in a saline bath containing two annular electrodes to produce an electric field gradient (Fig. a). The electrolyte was phosphate-buffered saline (Sigma-Aldrich P4417; 1× PBS contains 0.01 M phosphate buffer, 0.0027 M potassium chloride and 0.137 M sodium chloride, pH 7.4, at 25 ∘ C). We recorded from all 383 wires (recall that one wire is a reference electrode), first closing the switches in Bank 0 then in Bank 1, then in both banks (Fig. b). One set of measurements simply recorded the noise with no external field applied. Then we varied the concentrations of PBS (by factors 10 −3 , 10 −2 , 10 −1 , 1, and 10), which modulated the conductance of the bath electrolyte in the same proportions. For each of the 15 recording conditions (5 concentrations × 3 switch settings) we measured the root-mean-square noise on each of the 383 wires. Then we set to explain these 5 × 3 × 383 noise values based on the input circuitry of the Neuropixels device. After some trial-and-error we settled on the equivalent circuit in Fig. b. It embodies the following assumptions: Each electrode is a resistor R i in series with a capacitor C i . The resistor is entirely the bath resistance, so it scales inversely with the saline concentration. The shunt impedance Z S across the amplifier input is a resistor R S in parallel with a capacitor C S . The thermal noise from this R-C network and the voltage noise N amp from the amplifier and acquisition system sum in quadrature. With these assumptions, one can compute the total noise spectrum under each condition. In brief, each resistor in Fig. b is modeled as a white-spectrum Johnson noise source in series with a noiseless resistor (Thevenin circuit). The various Johnson noise spectra are propagated through the impedance network to the output voltage U . That power spectrum is integrated over the AP band (300–10,000 Hz) to obtain the total thermal noise. After adding the amplifier noise N amp in quadrature one obtains the RMS noise at the output U . This quantity is plotted in the fits of Fig. c. The result is rather insensitive to the electrode capacitance C i because that impedance is much lower than the shunt impedance Z S . By contrast, the bath resistance ( R 0 , R 1 ) has a large effect because one can raise it arbitrarily by lowering the saline concentration. To set the capacitor values, we, therefore, used the information from the Neuropixels spec sheet that the total electrode impedance at 1 kHz is 150 kΩ, 12 [12pt]{minimal}
$${C}_{i}=}}\,\ \,{{}}\, }}{{}} {{}})}^{2}-{{R}_{i}}^{2}}}$$ C i = 1 2 π ⋅ 1000 Hz ⋅ 150 k Ω 2 − R i 2 We also found empirically that the shunt impedance is primarily capacitive: R S is too large to be measured properly and we set it to infinity. Thus the circuit model has only 4 scalar parameters: R 0 , R 1 , C S , N amp . Their values were optimized numerically to fit all 15 measurements. This process was repeated for each of the 383 wires. The fits are uniformly good; see Fig. c for examples. As expected the thermal noise increases at low electrolyte concentration because the bath impedance increases (Fig. c). However, the noise eventually saturates far below the level expected for the lowest saline concentration. This reveals the presence of another impedance in the circuit that acts as a shunt across the amplifier input (Fig. a). We found that Z S ≈ 20 MΩ. Because the shunt impedance far exceeds the electrode impedances (~150 kΩ), it has only a minor effect on signal pooling, which justifies the approximations made in Eq. . The measured noise voltage also saturates at high saline concentration (Fig. c), and remains far above the level of Johnson noise expected from the bath impedance. That minimum noise level is virtually identical for the two electrodes that connect to the same wire, whether or not they are pooled, but it varies considerably across wires (Fig. d). We conclude that this is the amplifier noise N amp introduced by each wire’s acquisition system (Fig. a). Figure e shows the best-fit values of the 4 circuit parameters, histogrammed across all the wires on an unused probe. Note they fall in a fairly narrow distribution. The bath impedance of the electrodes (in normal saline) is ~13 kΩ, the shunt capacitance is ~10 pF, and the common noise N amp has a root-mean-square amplitude of ~6 μV integrated over the AP band (300–10,000 Hz). These measurements were performed on both fresh and used Neuropixels devices, with similar results. On a device previously used in brain recordings the bath impedance of the electrodes was somewhat higher: 30 kΩ instead of 13 kΩ. To measure the pooling coefficients we applied an oscillating electric field (1000 Hz) along the electrode array with a pair of annular electrodes (Fig. a). From the recorded waveform we estimated the signal amplitude by the Fourier coefficient at the stimulus frequency. Two different field gradients (called A and B) yielded two sets of measurements, each in the two split modes ( U 0,A , U 1,A , U 0,B , U 1,B ) and the pooled mode ( U P,A , U P,B ). For each of the 383 wires, we estimated the pooling coefficients of its two electrodes by solving 13 [12pt]{minimal}
$$[_{0,{{{{{{{}}}}}}}}&{U}_{1,{{{{{{{}}}}}}}}\\ {U}_{0,{{{{{{{}}}}}}}}&{U}_{1,{{{{{{{}}}}}}}}][_{0}\\ {k}_{1}]=[_{{{{{{{{}}}}}}},{{{{{{{}}}}}}}}\\ {U}_{{{{{{{{}}}}}}},{{{{{{{}}}}}}}}]$$ U 0 , A U 1 , A U 0 , B U 1 , B k 0 k 1 = U P , A U P , B These mixing coefficients k 0 and k 1 express the recorded amplitude U P in terms of the recorded amplitudes U 0 and U 1 , 14 [12pt]{minimal}
$${U}_{{{{{{{{}}}}}}}}={k}_{0}{U}_{0}+{k}_{1}{U}_{1}$$ U P = k 0 U 0 + k 1 U 1 whereas the pooling coefficients c 0 and c 1 (Eq. are defined relative to the input voltages V 0 and V 1 , namely 15 [12pt]{minimal}
$${U}_{{{{{{{{}}}}}}}}={c}_{0}{V}_{0}+{c}_{1}{V}_{1}$$ U P = c 0 V 0 + c 1 V 1 The U i differ from the V i only by the ratio of electrode impedance to shunt impedance. Given the measured value of Z S ≈ 20 MΩ that ratio is <1%, a negligible discrepancy. So the measured k 0 and k 1 are excellent approximations to the pooling coefficients c 0 and c 1 , which in turn reflect the ratio of the two electrode impedances (Eq. .
We used a Neuropixels 1.0 probe to record neural signals from a head-fixed mouse (C57BL/6J, male, 9 months old). The probe entered the brain at 400 μm from the midline and 3.7 mm posterior from bregma at ~45 ∘ and was advanced for ~6 mm, which corresponded to all of Bank 0 and roughly half of Bank 1. This covered many brain areas, from the retrosplenial cortex at the top to the medial preoptic nucleus at the bottom. A detailed description of the mouse surgery, probe implantation, and post hoc histology and imaging of probe track can be found in a previous report . All procedures were in accordance with institutional guidelines and approved by the Caltech IACUC, protocol 1656. Once the probe was implanted, data were recorded in the following order: (1) split-mode in Bank 0 (i.e. all 384 wires connected to recording sites in Bank 0); (2) split-mode in Bank 1; (3) pooled-mode across Banks 0 and 1. Each recording lasted for ~10 min. Following brain recordings, the array was cleaned according to recommended protocol by immersion in tergazyme solution and rinsing with water.
For “manual” spike-sorting of the in vivo recordings, we used KiloSort1 (downloaded from https://github.com/cortex-lab/KiloSort on Apr 10, 2018). We ran the automatic template-matching step; the detailed settings are available in the code accompanying this manuscript. This was followed by manual curation, merging units, and identifying those of high quality. These manual judgments were based on requiring a plausible spike waveform with a footprint over neighboring electrodes, a stable spike amplitude, and a clean refractory period. This was done separately for each of the three recordings (split-mode Bank 0, split-mode Bank 1, pooled-mode). We implemented the “hot sorting” feature in KiloSort2 (downloaded from https://github.com/MouseLand/Kilosort2 on Mar 19, 2020). No manual curation was used in this mode, because (1) we wanted to generate a reproducible outcome, and (2) manual inspection is out of the question for the high-volume recordings where electrode pooling will be applied. We first sorted the two split-mode recordings and used their templates to initialize the fields W and U of rez2 before running the main template-matching function on the pooled recording (see the accompanying code for more details). Finally, the splits, merges, and amplitude cutoffs in Kilosort2 ensured that the final output contained as many well-isolated units as possible. We then selected cells designated as high quality (KSLabel of Good) by KiloSort2, indicating putative, well-isolated single neurons . To elaborate on the internal operations of Kilosort2: Spike-sorted units were first checked for potential merges with all other units that had similar multi-channel waveforms (waveform correlation >0.5). If the cross-correlograms had a large dip (<0.5 of the stationary value of the cross-correlogram) in the range [-1 ms, +1 ms], then the units were merged. At the end of this process, units with at least 300 spikes were checked for refractory periods in their auto-correlograms, which is a measure of contamination with spikes from other neurons. The contamination index was defined as the fraction of refractory period violations relative to the stationary value of the auto-correlogram. The default threshold in Kilosort2 of 10 percent maximum contamination was used to determine good, well-isolated units. Following spike sorting, we applied the matching algorithm based on cosine similarity (Fig. b) to determine how many cells identified in split recordings could be recovered from the pooled recording. This was compared with the results from “cold sorting”, in which the pooled recording was sorted on its own, as well as to the conventional sorting that includes manual curation (Fig. c).
After sorting the split and pooled recordings, we computed the average waveform of every cell. Specifically, for each cell we averaged over the first n spikes, where n was the lesser of 7500 or all the spikes the cell fired during the recording. We then sought to identify every cell in the pooled recordings with a cell in the split recordings. This was done by the following procedure: Let S denote a cell sorted from the split-mode recording ( [12pt]{minimal}
$$S {{{{{{{}}}}}}}$$ S ∈ S ) and S i its waveform at channel i . Although i can range from 1 to 384 (the total number of wires available in the Neuropixels probe), we only focus on the 20 channels above and 20 channels below the channel with the largest amplitude ( [12pt]{minimal}
$$i^{}$$ i ′ ), i.e. [12pt]{minimal}
$$J=[i^{} -20,i^{} +20]$$ J = [ i ′ − 20 , i ′ + 20 ] . We wish to find the cell P from the pooled-mode recordings ( [12pt]{minimal}
$$P {{{{{{{}}}}}}}$$ P ∈ P ) that is closest to S . To do so, we compute the cosine similarity score for each pair ( S , P ): 16 [12pt]{minimal}
$${{ }}(S,P)=}}}}}}} {{{{{{{}}}}}}}}{| | {{{{{{{}}}}}}}| | | | {{{{{{{}}}}}}}| | }$$ Σ ( S , P ) = S ⋅ P ∣ ∣ S ∣ ∣ ∣ ∣ P ∣ ∣ where S and P are column vectors obtained by concatenating every S j and P j ( j ∈ J ), respectively, and ∣∣⋅∣∣ is the ℓ 2 norm. Σ is a [12pt]{minimal}
$$| {{{{{{{}}}}}}}|$$ ∣ S ∣ -by- [12pt]{minimal}
$$| {{{{{{{}}}}}}}|$$ ∣ P ∣ matrix. We identify the largest element of Σ, which corresponds to the most similar pair of S and P . We then update Σ by removing the row and column of this largest element. This process gets iterated until every [12pt]{minimal}
$$P {{{{{{{}}}}}}}$$ P ∈ P is given a best match. By manual inspection we found that pairs with similarity scores >0.9 were good matches.
Once each [12pt]{minimal}
$$P {{{{{{{}}}}}}}$$ P ∈ P was assigned a match [12pt]{minimal}
$$S {{{{{{{}}}}}}}$$ S ∈ S , the pooling coefficient ( k ) was computed by solving the optimization problem below for each i with a least squares method (mldivide in Matlab). 17 [12pt]{minimal}
$${{{{}}}_c}\ {}_{k_i}\ | |P_i - k_{i}S_i | |$$ eq : find c arg min k i ∣ ∣ P i − k i S i ∣ ∣ Sometimes a single recording site detected action potentials from multiple cells. As a result, its pooling coefficient could be estimated from the signal of each of these cells. Typically these estimates deviated from each other by <0.1. In these cases, we assigned the average of these values as the pooling coefficient of the recording site. When two recording sites that share a wire in pooled mode each carry a significant signal, it enables the estimation of both of their pooling coefficients. Examples of such sites are shown in Figs. d–e (up to 50 pairs in Banks 0 and 1).
Generating simulated data We simulated extracellular voltage signals on 12 groups of 4 local electrodes (“tetrodes”). Each time series was sampled at 30,000 samples/s and extended over 600 s. After combining signal and noise as described below, the time series were filtered with a passband of 300–5000 Hz. Each tetrode carried spikes from a single unit. The spike waveform of the unit was chosen from an actual mouse brain Neuropixel recording, with a different waveform on each tetrode. Within a tetrode, one electrode chosen at random carried this spike at the nominal peak-to-peak amplitude, V (Fig. b). On the other three electrodes, the spike was scaled down by random factors drawn from a uniform distribution over [0,1]. The spike train was simulated as a Poisson process with a forced 2-ms refractory period, having an average firing rate r (Fig. b). Three sources of noise—biological noise N bio , thermal electrode noise N the , and common amplifier noise N com —were generated as gaussian processes. The quoted noise values (Fig. b) refer to root-mean-square amplitude over the 300–5000 Hz passband. Thermal noise was sampled independently for each electrode, but the biological noise was identical for electrodes within a tetrode, given that they likely observe the same background activity. Electrode pooling across M tetrodes was implemented by combining the voltage signals of the corresponding electrode on each tetrode, resulting in signals on four wires. In the process each electrode signal was weighted by 1/ M , then the amplifier noise was added to the resulting average. Amplifier noise was sampled separately for each wire. Tetrodes were added to the pool in a sequence determined by the spike shape of their units. We started with the two most dissimilar units as determined by the cosine similarity of their spike waveforms. Then we progressively added the unit that had the lowest similarity with those already in the pool. Sorting simulated data The simulated 4-wire time series were sorted using KiloSort2; detailed configuration settings are available in the code accompanying this paper. We found it necessary to turn off the “median voltage subtraction” during preprocessing, because that feature somehow introduced artifacts in the 4 voltage traces. This did not occur when processing electrode array data with many channels, for which the algorithm is intended. We note that an effective means of subtracting the common signal across wires may help suppress the biological noise and lead to better sorting results. When large numbers of tetrodes were pooled the signal-to-noise ratio dropped to the point where KiloSort2 could not form templates in the preprocessing step. Under those conditions, we report zero units recovered (Fig. b). Scoring simulated data Following previous reports , , the spike times of the sorted units and the ground truth units were matched and compared using the confusion matrix algorithm from ref. . We set the acceptable time error between sorted spikes and ground-truth spikes at 0.1 ms. Then we counted the number of spike pairs with matching spike times, n match , the number of unmatched spikes in the ground-truth unit, n miss , and the number of unmatched false-positive spikes in the sorted unit, n fp . To assess the quality of the match between ground-truth and sorted units we adopted the Accuracy definition in ref. : 18 [12pt]{minimal}
$${{{{{{{}}}}}}}=_{{{{{{{{}}}}}}}}}{{n}_{{{{{{{{}}}}}}}}+{n}_{{{{{{{{}}}}}}}}+{n}_{{{{{{{{}}}}}}}}}$$ Accuracy = n match n match + n miss + n fp Figure shows the accuracy distribution obtained for various degrees of pooling. Sorted units with accuracy >0.8 were counted as “recovered” from the pooled signal. For each parameter set we ran the simulation three times, randomizing the noise and the spike times. Results from the three runs are reported by mean ± SD (Fig. b).
Further information on research design is available in the Nature Research Reporting Summary linked to this article.
The Role of Predictive Biomarkers in Endocervical Adenocarcinoma: Recommendations From the International Society of Gynecological Pathologists

Several studies have addressed the integrated genomic and molecular characterization of cervical cancer, including a small subset of ADC patients , . A whole exome sequencing analysis of 115 cervical carcinomas with paired normal samples included 24 ADC cases and demonstrated ELF3 and CBFB somatic mutations in 13% and 8% of cases, respectively . Moreover, the study confirmed PIK3CA (16%) and KRAS (8%) mutations, and showed that the PIK3CA/PTEN pathway was significantly mutated in the ADC group, which is relevant as this pathway is related to resistance to anti-HER2 therapies .

TCGA performed an extensive molecular characterization that included 32 ADC cases, some of them HPV negative . The study confirmed frequent PIK3CA and KRAS mutations, and ERBB3 (HER3) mutations. Frequent BCAR4 amplification, putatively associated with anti-HER2 therapy, was also detected, as was frequent CD274 amplification, a putative target for immunotherapy .

A high-throughput genotyping platform, including 1250 known mutations in 139 cancer genes, was used in 80 cervical tumors, including 40 SCC and 40 ADC cases , a vast majority of them associated with HPV. In this study, PIK3CA mutation rates did not differ significantly between ADC and SCC, whereas KRAS mutations were identified only in ADC . In a recent study in 154 cervical cancers, including 43 ADC, KRAS mutations were almost restricted to ADC patients, whereas PIK3CA mutations were more frequent in SCC. TP53 mutations were more predominant in HPV-independent tumors, and STK11 genomic alterations showed an association with lower overall survival .

A transcriptomic signature with molecular networks associated with SCC and ADC was characterized using oligomicroarray and pathway analysis . Some genes ( KRT17 , IGFBP2 , CALCA , VIPR1 ) were differentially expressed in ADC and SCC. cDNA microarray analysis demonstrated differentially expressed genes specific for ADC ( CEACAM5 , TACSTD1 , S100P , and MSLN ) . In a different study , the authors assessed differential expression between ADC and SCC in a set of genes including those coding for 12-lipoxygenase ( 12-LOX ), keratin 4, trypsinogen 2 ( TRY2 ), Rh glycoprotein C ( RhGC ), collagen type V alpha 2, integrin alpha 5, integrin alpha 6, and C-MYC .

The clinicopathologic and prognostic relevance of KRAS mutation was assessed in a series of 876 invasive cervical carcinomas, which included 210 ADC cases . KRAS mutations were associated with HPV18, and were more frequently detected in nonsquamous carcinoma, with a frequency of 7.3% in ADC. The presence of KRAS mutations was an independent predictor of tumor recurrence. Another study of cervical cancers, including 55 ADC, analyzed by mass spectrometry assessing 171 somatic hot-spot mutations, identified KRAS mutations in 24% of ADC in comparison with 3% of SCC cases . In multivariate analysis, however, mutation status was not an independent predictor of survival.

Cervical ADC occasionally shows HER2 overexpression. In one study , 46% of ADC showed positive expression for EGFR and HER2, which significantly correlated with lymph node metastasis, stage, and short relapse-free survival. HER2 expression significantly correlated with tumor size. In a different study , HER2 expression was assessed in 13 cases of gastric-type ADC.
Immunostaining was equivocal in six cases and ERBB2 (HER2) amplification was identified in one case. The relevance of HER2 mutations will be discussed later on.

A few studies have characterized HPV-independent ADC, including the gastric type . Banister et al. analyzed a series of 212 SCC and 44 ADC to characterize HPV-independent cervical cancers. HPV-associated tumors expressed E2F target genes and increased AKT/MTOR signaling, while HPV-independent tumors had increased WNT/β-catenin and Sonic Hedgehog signaling. HPV-independent tumors showed a global decrease in DNA methylation, although there was some promoter-associated CpG hypermethylation. HPV-independent tumors were enriched for nonsynonymous somatic mutations in TP53 and ARID , as well as in the WNT and PI3K pathways.

Garg et al. used next-generation sequencing of 161 unique cancer-driver genes for single-nucleotide and copy-number variations, gene fusions, and insertions/deletions in 14 cases. TP53 was the most frequently mutated gene, followed by MSH6 , CDKN2A/B , POLE , SLX4 , ARID1A , STK11 , BRCA2 , and MSH2 . Abnormal p53 expression was observed in 9 cases by immunohistochemistry, whereas MDM2 gene amplification at the 12q15 locus was seen in 2 cases that expressed normal p53 levels by immunohistochemistry.

Hodgson et al. performed a targeted massively parallel sequencing assay of 447 cancer genes and 191 regions across 60 genes for rearrangement detection in 56 ADC samples that included 45 HPV-associated and 11 gastric-type tumors. KRAS , TP53 , and PIK3CA were the most commonly mutated genes, whereas alterations in TP53 , STK11 , CDKN2A , ATM , and NTRK3 were significantly more common in gastric-type ADC. Tumors associated with adverse outcome, regardless of the histologic type, more commonly had alterations in KRAS , GNAS , and CDKN2A . The association between cervical ADC and STK11 had been previously noted , based on the relationship between minimal deviation gastric-type ADC and Peutz-Jeghers syndrome.

As mentioned in previous publications, the pattern of ADC invasion according to the Silva criteria has prognostic relevance. The Silva classification, however, is limited to HPV-associated cervical ADC. The molecular profile of cervical ADC has been associated with the Silva pattern of invasion, using targeted sequencing with the Ion AmpliSeq Cancer Hotspot Panel v2, which assesses hotspot regions of 50 oncogenes and tumor suppressor genes . Mutations were frequently found in PIK3CA (30%), KRAS (30%), MET (15%), and RB1 (10%). PIK3CA , KRAS , and RB1 mutations were seen exclusively in pattern B or C subgroups, whereas KRAS mutations correlated with advanced stage at presentation. Additional studies have shown molecular abnormalities in cervical ADC at different levels in genes such as ZNF58S, SOX1, SOX17, EZH2 , and L1CAM – .

A vast majority of patients with advanced cervical ADC are treated by combined radiation and chemotherapy. The mechanisms of resistance to these anticancer treatments are complex, and there is a large amount of literature suggesting putative markers involved in response to treatment. It is not the intention of this section to provide a comprehensive review on this topic. The vast majority of the publications refer to cervical cancer in general, without emphasis on cervical ADC, which is important, as there are some studies suggesting poor response to radiation therapy in ADC in comparison with SCC – .
In one review of 19 publications on the mechanisms involved in resistance to radiation therapy , the authors identified a total of 23 biomarkers, which could be related to biologic functions such as apoptosis, cell adhesion, DNA repair, hypoxia, metabolism, pluripotency, and proliferation. In a different review of published studies , the authors identified 6 immunohistochemical markers with controversial correlation with chemoradiotherapy response (p53, p21, Ki67, EGFR, HER2, BCL-2), and 11 immunohistochemical biomarkers with positive correlation with chemoradiotherapy (HPV, pAKT, COX-2, nitric oxide synthase, HIF-1-alpha, HIF-2-alpha, VEGF, NF-kb, Ku80, EMMPRIN). Moreover, microarray studies have also suggested that the expression of sets of genes was associated with presence or absence of recurrence after radiation therapy – .

Several processes and proteins have been related to cisplatin resistance in cervical cancer , including: (1) a reduction in the intracellular accumulation of platinum compounds (CTR1, multidrug resistance proteins, GSH), (2) increase in DNA damage repair, (3) inactivation of apoptosis (caspases, BCL family, NF-kb, p53 signaling), (4) activation of epithelial to mesenchymal transition, and (5) other mechanisms such as alteration in DNA methylation, microRNA profile, stemness, and stress response. CD44v6, XRCC, and mTOR were also related to the prediction of sensitivity to platinum-type agents in neoadjuvant chemotherapy . Some other biomarkers have been related to sensitivity to specific agents, such as CHFR in the prediction of sensitivity to paclitaxel, WRN in relation to sensitivity to CPT-11, and HIF-1α in the prediction of sensitivity to topotecan . Neoadjuvant treatment would provide a novel window of opportunity to study response and biomarker relationships. It would be helpful if pathologists developed a standardized approach to assess response to neoadjuvant treatments.

Different strategies have been proposed in the treatment of cervical cancer. Yet again, most studies and clinical trials do not consider ADC patients separately, so the information must be taken with caution.

Angiogenesis is a critical process in carcinogenesis and tumor progression. HPV oncoproteins play key roles in upregulating angiogenesis through their effects on p53 degradation and inactivation of pRb, which lead to increased VEGF pathway and HIF-1-alpha expression . Angiogenesis has been successfully targeted in cervical cancer, as the results of the GOG 240 trials (including 310 patients with SCC and 86 with ADC) and subsequent trials were published – . Since then, bevacizumab was approved by the FDA and became standard of care in a subset of patients with advanced cervical cancer. No predictive biomarker of antiangiogenic response has reached clinical practice. Several other drugs and corresponding predictive biomarkers have been proposed , . They include EGFR inhibitors – and PARP-1 inhibitors , , because of the expression of EGFR and presence of homologous recombination-related gene mutations in cervical cancer. None of them, however, have reached clinical practice. Tisotumab vedotin, an antibody-drug conjugate targeting tissue factor, has shown encouraging results , but no specific predictive biomarker has been proposed.

A promising targeted therapy approach at present addresses ERBB2 (HER2) and ERBB3 (HER3) , the genes that encode HER2 and HER3. As mentioned before, HER2 overexpression and HER2 amplification were previously shown in cervical ADC.
Somatic mutations in ERBB2/3 (HER2/3) were found in a wide range of cancers , and can lead to constitutive HER2/3 activation. HER2 mutations were detected in 4% to 5.5% of cervical cancers , . PIK3CA mutations represented one of the most frequent co-alterations in HER2 -mutant cancers ; this is a problem, as PIK3CA mutations are known to result in resistance to anti-HER2 treatment . Preliminary basket trials have shown that a subset of cervical cancer patients treated with HER2 inhibition achieved complete/partial response or stable disease .

In one study with 1015 patients with cervical cancer, HER2 mutations were found in 4.5% of ADC, but only in 2.1% of SCC . HER2 mutations frequently coexisted with PIK3CA or KRAS mutations. In that series of cases, 33 nonsynonymous somatic HER2 mutations were detected, including 30 missense mutations and 3 in-frame deletions. Nineteen HER2 mutations were located within the extracellular domain, four in the transmembrane domain, and 10 in the kinase domain. The most prevalent mutation hotspot was S310F (6 cases), followed by A270S (5 cases). Among patients who were tested for both HER2 gene mutations and overexpression/amplification, no concurrence of mutation and overexpression/amplification was found. A case report has shown a successful result of HER2 inhibition in 1 patient with advanced cervical ADC with HER2 amplification . It appears that HER2 inhibition can be an interesting tool for ADC patients with HER2 mutation or amplification, and maybe with BCAR4 amplification. A combined therapy simultaneously targeting HER2 and PIK3CA has also been suggested .

Pathologists have experience in the quality control of HER2 expression assessment , . Interpretation of predictive biomarkers, such as HER2, has been shown to be context specific, as seen in differences in criteria for breast and gastric carcinoma . Therefore, it is worth mentioning that there is still insufficient experience on how to score HER2 immunohistochemistry in the context of cervical ADC. Gynecologic pathology studies focusing on scoring and quantitating HER2 expression in cervical ADC should be encouraged.

The main objective of cancer immunotherapy is to enhance tumor antigen-specific immune responses that can target tumor cells. Many different studies have demonstrated that immunotherapy may be helpful in the treatment of a variety of tumors. The emergence of immune checkpoint inhibitors has opened a new door to cancer therapy. Cervical cancer is a good candidate tumor for immunotherapy approaches for several reasons: it has a relatively high tumor mutational burden , frequent amplification of immune targets , and frequent involvement of HPV. There is increasing evidence showing that immune checkpoint inhibitors may have a potential role in the treatment of virus-related cancers . It has been shown that HPV E7 may increase PD-L1 expression after transfection into cancer cells . Immune checkpoints such as programmed death 1 (PD-1) and cytotoxic T lymphocyte antigen 4 (CTLA-4) are membrane-bound molecules expressed on immune cells. Immune checkpoint inhibitors block the binding of immune checkpoint molecules to their ligands, reversing the inactivation of T cells and enhancing their immune response. These inhibitors may have a role in virus clearance and may have a greater effect in virus-associated cancers .
In a recent overview of the role of biomarkers for the prediction of response to checkpoint immunotherapy , it is shown that cervical cancer is frequently positive for PD-L1 and shows a moderate mutational burden, with 5–6 mutations per megabase. Higher ratios of CD8+ tumor-infiltrating lymphocytes to CD4+ T regulatory cells have been associated with improved survival. The response rate of cervical cancer to checkpoint immunotherapy is within the range of 10% to 25%.

PD-L1 expression was assessed in 2 cohorts of primary cervical carcinomas (156 SCC and 49 ADC), and matched primary and metastatic tumors (96 SCC and 31 ADC), using the E1L3N clone on an automated Ventana immunostainer. Tumors were designated positive when >5% of tumor cells stained. A distinction was made between diffuse (throughout the tumor) and marginal (at the interface between tumor and stroma) staining patterns. Scores were also calculated for PD-L1-positive tumor-infiltrating immune cells. SCC was more frequently positive for PD-L1 and contained more PD-L1-positive tumor-associated macrophages. Disease-specific survival was significantly worse in ADC patients with PD-L1-positive tumor-associated macrophages compared with ADC patients without PD-L1-positive tumor-associated macrophages. No difference between primary and metastatic tumors was seen.

In another study , PD-L1 (clone SP142), scored by combining intensity and percentage of positive cells, was expressed in 32 of 93 (34.4%) cervical carcinomas, including 2 of 12 (16.7%) ADC. A meta-analysis including seven studies with 783 patients also suggested that PD-L1 overexpression was associated with poor overall survival. The methodology was different, and the number of ADC cases was variable – . One study including 127 samples was limited to ADC . The density of immune cells and expression levels were compared between the tumor cell groups and stroma using digital image analysis. Expression of PD-L1 on tumor cells was found in 17.3% of the cases. A higher density of stroma-infiltrating lymphocytes and macrophages was found in PD-L1-positive tumors than in negative tumors. In this study, patients with PD-L1-positive tumors tended to experience longer survival. In one study with 97 patients, 7 of them ADC , PD-L1 expression correlated with tumor-infiltrating lymphocytes and response to neoadjuvant chemotherapy.

Four phase 1 and 2 clinical trials assessed the value of checkpoint inhibitors in cervical cancers; ADC patients were included in 3 of them. In one of them , ipilimumab was administered to 42 previously treated patients with cervical cancer, 13 of them with ADC. PD-L1 expression, as assessed by the E1L3N clone, was negative in 20 patients, positive (10%) in 4, and positive (>10%) in an additional 4 patients. There was partial response in 1 patient and stable disease in 10. PD-L1 expression was not predictive of therapeutic benefit and did not change during treatment. In the Keynote-028 trial , pembrolizumab was administered to 22 previously treated patients with cervical cancer, including a single patient with ADC. PD-L1 expression, assessed by the 22C3 clone with a cutoff of >1%, was positive in tumor cells in 18 cases, and in both tumor and stromal cells in 6 cases. There was partial response in 4 patients, and stable disease in 3 patients. Finally, in the Keynote-158 study , pembrolizumab was administered to 98 patients with previously treated cervical cancer, including 5 patients with ADC.
PD-L1 was assessed by the 22C3 clone using the combined positive score (CPS; positive if ≥1), defined as the number of PD-L1-staining cells (tumor cells, lymphocytes, and macrophages) divided by the total number of viable tumor cells, multiplied by 100. All ADC were positive (CPS > 1). The objective response rate was higher in patients with PD-L1-positive tumors. No responses were observed in patients with PD-L1-negative tumors, but the number of cases was too small to draw conclusions.

After publication of the Keynote-158 trial, the Food and Drug Administration (FDA) approved pembrolizumab for patients with recurrent or metastatic cervical cancer with disease progression on or after chemotherapy, whose tumors express PD-L1 (CPS of 1 or higher), as determined by the FDA-approved companion test with the 22C3 clone. Until new data are provided (from additional clinical trials with other drugs, a significant proportion of ADC patients, and assessment of the different antibodies available as the best companion diagnostic test), it seems reasonable to support the current FDA-approved guidelines. There are several ongoing phase III randomized trials (Keynote-826/NCT03635567, BEATcc/NCT03556839, GOG3016/NCT03257267) with several immune checkpoint inhibitors in women with metastatic and/or recurrent cervical cancers.

The tumor microenvironment can have an impact on prognosis. Several studies have shown improved survival associated with an increase in the number of tumor-infiltrating lymphocytes , . There is an association between a high number of intratumor CD8+ lymphocytes and absence of lymph node metastasis . However, the perspectives of immunotherapy in cervical carcinoma go beyond checkpoint inhibitors. TIM3 is a candidate target that is expressed on immune cells and contributes to immune tolerance . TIM3 is expressed in cervical tumors and may be associated with tumor progression . Other interesting strategies are therapeutic vaccines and adoptive cell therapies.

Recommendation 1: Expert gynecologic pathologists should take the lead in developing robust guidelines for testing and scoring HER2 and PD-L1 immunohistochemistry to facilitate standardization in clinical trials. It is strongly recommended to interpret and report predictive biomarkers of response to treatment in endocervical ADC in correlation with well-established pathologic parameters.

Recommendation 2: Until specific recommendations are validated for endocervical ADC, prediction of immunotherapy response criteria is identical to that for squamous cervical cancer. At present, PD-L1 immunohistochemistry (CPS of 1 or higher), as determined by the FDA-approved companion test with the 22C3 clone, is recommended for pembrolizumab treatment of patients with recurrent or metastatic cervical cancer with disease progression on or after chemotherapy.

Recommendation 3: With the exception of PD-L1, and based on the lack of scientific evidence at the present time, no other biomarker is recommended for the prediction of treatment response in endocervical ADC.
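To make the CPS arithmetic behind Recommendation 2 concrete, here is a toy calculation in Python. It is our own illustration, not part of the ISGyP recommendations, and the cell counts are invented:

```python
def combined_positive_score(pdl1_tumor_cells: int,
                            pdl1_lymphocytes: int,
                            pdl1_macrophages: int,
                            viable_tumor_cells: int) -> float:
    """CPS = PD-L1-staining cells (tumor cells, lymphocytes, macrophages)
    divided by viable tumor cells, times 100 (conventionally capped at 100)."""
    stained = pdl1_tumor_cells + pdl1_lymphocytes + pdl1_macrophages
    return min(100.0, 100.0 * stained / viable_tumor_cells)

# Hypothetical specimen counts, for illustration only.
cps = combined_positive_score(3, 5, 2, 500)   # -> 2.0
print(cps, "eligible" if cps >= 1 else "not eligible")
```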
Radiation therapy is an effective treatment for local tumor control, but may also elicit a systemic effect that can kill cancer cells outside of the radiation field. This has been reported as the abscopal effect , . The mechanisms responsible for the abscopal effect are not well understood, and the immune system is thought to play an important role. It has been suggested that immune modulation from PD-1/PD-L1 inhibitors and radiation therapy through nonredundant pathways may contribute to synergistic activity, which is the basis for combining radiation therapy and immunotherapy. Some studies show increased PD-L1 positivity in tissue samples after radiation therapy .

To date, no definitive data can be obtained from the literature regarding predictive biomarkers for treatment response in cervical ADC (Table ). So far, clinical trials have predominantly included patients with SCC. Clinical trials specifically designed for endocervical ADC patients are encouraged to elucidate the predictive value of HER2 amplification and mutations as well as PD-L1 expression. Involvement of pathologists in designing these clinical trials is needed to identify new predictive biomarkers in cervical ADC. Although clinical trials are not the main domain of gynecological pathologists, it is important to emphasize that their involvement is needed for an ideal methodologic strategy. Pathologists should take the lead in developing robust guidelines for testing and scoring HER2 and PD-L1 immunohistochemistry to facilitate standardization in clinical trials. Given the relative rarity of ADC, an international multi-institutional effort is required to move this field forward, particularly to recruit enough patients with HPV-independent ADC to achieve the appropriate statistical power for an HPV-independent arm.

Recommendation 4: Clinical trials specifically designed for HPV-associated and HPV-independent endocervical ADC patients are strongly encouraged to elucidate the predictive value of some biomarkers (ERBB2, PD-L1, and others). Trials combining the unbalanced number of patients with ADC (including HPV-independent disease) and SCC may yield results not necessarily applicable to endocervical adenocarcinoma patients.

Recommendation 5: Involvement of expert gynecologic pathologists in the design of future clinical trials is strongly recommended to appropriately identify new predictive biomarkers in cervical adenocarcinoma.
Interpreting SARS-CoV-2 seroprevalence, deaths, and fatality rate — Making a case for standardized reporting to improve communication

Introduction

The virus causing the Coronavirus Disease 2019 (COVID-19) pandemic, severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), was first identified in December 2019 and has now infected people worldwide. We observe significant differences in the risk of dying from COVID-19 when comparing the numbers of cases and deaths reported in different cities, states and countries. For example, at the end of May 2020, the proportion of total deaths in the age group of ≥ 80 years was 73.4% in the UK, 43.8% in Ireland, and 8.3% in Mexico . Does this mean the virus is more deadly for elders in one place than another? The reported headline figures are affected by several factors that vary from one specific location to another. These factors include the number of people tested for the virus, the access to healthcare, local social distancing guidelines, how a COVID-19 death is defined and the proportion of the population who are especially vulnerable to the virus, among others. Understanding the differences in COVID-19 fatality rates between regions and countries requires careful interpretation of how seroprevalence, the percent of the population positive for infection, is estimated and how healthcare providers record and report the numbers of cases and deaths.

The case fatality rate (CFR), given by the ratio of deaths divided by the number of documented infections, was 12.2% in Italy, 4.9% in Spain, 3.0% in Brazil, and 3.0% in the US as of September 17, 2020 . Just two months later, on November 12, 2020, these numbers converged: the CFR for each was 4.9%, 2.8%, 2.9%, and 2.3%, respectively. How do we determine which numbers accurately assess the danger and risk this virus poses?

To have a reasonable estimate of the actual risk of death, we need to understand the difference between the CFR and the infection fatality rate (IFR). As opposed to the CFR, the IFR is given by the ratio of deaths divided by the total number of actual infections by SARS-CoV-2 in the population. In an ideal world, every individual would be tested, and the CFR and IFR would converge to the same number. Without this information, we lean on estimates for the total number of infections. The CFR has the benefit of being calculated with raw data and can be useful to determine how well hospitals treat COVID-19 cases. Therefore, what might explain the decline in CFR mentioned above are the large strides we have made in the treatment of the disease and protection of those who are vulnerable. Yet, because the CFR does not consider the portion of asymptomatic and mild undocumented infections, it still underestimates the total number of people infected and overestimates the disease's actual mortality in broader applications. Despite this shortcoming, CFR has been the most commonly used value when referring to the COVID-19 pandemic's mortality risk.

The IFR, in contrast, estimates the disease mortality by considering the total number of infected individuals. As such, it is essential to have trustworthy estimates of the IFR, so policymakers on local, state, and federal levels can make informed decisions. The challenge is that it is very difficult to determine the total number of infected individuals unless systematic sampling and sophisticated statistical inferences are carried out.
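As a concrete illustration of the two definitions, the sketch below contrasts the CFR with the IFR. The numbers are hypothetical placeholders of our own, chosen only to show the calculation:

```python
def cfr(deaths: int, documented_cases: int) -> float:
    """Case fatality rate: deaths per documented (test-confirmed) infection."""
    return deaths / documented_cases

def ifr(deaths: int, population: int, seroprevalence: float) -> float:
    """Infection fatality rate: deaths per estimated total infection,
    approximating total infections as seroprevalence * population."""
    return deaths / (seroprevalence * population)

# Hypothetical numbers, for illustration only.
print(f"CFR = {cfr(27_000, 230_000):.1%}")            # -> 11.7%
print(f"IFR = {ifr(27_000, 47_000_000, 0.05):.2%}")   # -> 1.15%
```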
The question now is how exactly do we make reliable IFR estimates? Studies that attempt to tackle this problem use fundamental assumptions to carry out fatality rate estimations. These assumptions must be identified and documented to adjust for their effects on the field's output measurements. Thus, when different results emerge from studies aimed at measuring the same phenomena, the reasons they differ can be identified. When done properly, diligent reporting of a study's assumptions, methods and results can lead to a better understanding of the origins of the study's findings and how they might be applied to other circumstances. Accordingly, for a pandemic as widespread as COVID-19, reporting the key assumptions, variables and contextual details – metadata – is critical.

Every value, from IFR to hospitalization rate, describing the state of the pandemic has several different estimates, each showing COVID-19 in a different light. Not being able to properly interrelate and apply these data can cost lives.

Our analysis dissects epidemiological studies related to COVID-19 and brings to light the key determinants of how the SARS-CoV-2 virus spreads and causes deaths. Based on this analysis, we identify factors that contextualize the fatality rate estimates. These factors are critical to fully understanding the origin of each rate's qualitative value. The idea of reporting the metadata for estimates can be extended to other epidemiological parameters. We believe the implementation of core metrology principles in epidemiology can help explain the discrepancies between reported pandemic values and improve how policymakers, the media and the general public use the data measured by healthcare providers and epidemiologists. First, we will delve into the uncertainties of the pandemic's spread as we understand it today.
The problem lies in documenting and estimating infection

While the number of deaths is relatively concrete, the large number of infections with mild or no symptoms can leave many infections hidden from data banks. Thus, estimations of the seroprevalence are both important and complex. Yet, when attempting to decipher the number of infections in a population, the impact of more minute details is often overlooked. It is important to first understand the characteristics of a population before deliberating on the many studies that estimate the virus's total spread. Then, one can more objectively assess how the metadata of a study has affected the results. Since the start of the pandemic, serology-informed studies have been used to estimate seroprevalence. With information from serology studies, virus transmission, and deaths, models can estimate or forecast the seroprevalence and pandemic dynamics. Serology-informed studies and models have differences and similarities in how they must be analyzed to maximize their applicability and accuracy within a given context. We will begin with mathematical models.

2.1 How are models utilized to estimate the infection fatality rates?

Parameters are critical for the analysis of models intended to estimate and forecast the dynamics of any pandemic. The parameter types and values are intrinsically linked to the contextual and environmental details of a lab or field study. It can be difficult to ensure the accuracy of parameters; nevertheless, the source of the parameter derivation can and should be reported. Implementing this reasoning is critical not only for those creating the model but also for those looking to apply the model. It is not lost on us that models and their assumptions come with their inherent variability and complexity. It is our goal to help diminish uncertainty and expand the number of well-informed predictions.

A study by Ioannidis, et al. reviewed the state of COVID-19 models as of late August 2020 and identified factors that lead to poor forecasting. Table 3 in the paper by Ioannidis, et al. lists potential reasons for the failure of COVID-19 forecasting, and most are rooted in the quality of the parameters. Included in the table are examples of poor data input on key features of the pandemic (such as inflated mortality and transmission rates), incorrect assumptions regarding the demographic heterogeneity of populations, lack of incorporation of epidemiological features (such as age structure and comorbidities), poor use of past evidence on the effects of current interventions (using observational data of questionable quality and applicability to the current pandemic circumstance), and examining only one or a few dimensions of the problem (no consideration of other potential conflicting factors). Models must be informed on the key determinants of the pandemic and provide transparency on the parameter derivations to ensure accurate interpretation by readers.

Awareness of the various determinants playing into the pandemic is a critical first step for those forecasting and estimating values. Values can be cherry-picked from studies without reporting the metadata that supports these parameter values, and long-term consequences can arise from hasty and uninformed decision-making by readers of those published findings. In epidemiology and public health, forecasting models are often published without supporting metadata for their parameters. As a précis, models used earlier in the pandemic to make forecasts lacked critical metadata.
We must encourage accountability and rigor in models to minimize the potential for negative health impacts on populations.

2.2 How are serology studies utilized to estimate the infection fatality rates?

While models give insight into the pandemic's general dynamics, accurately estimating seroprevalence is essential for the implementation of forecasting models. The challenge is that seroprevalence estimates are difficult to obtain as they have many variables that come into play. The goal of serology studies is to provide a better look at the pandemic's spread by sampling directly from the population. Ideally, those conducting this type of study collect blood samples from a large set of people, representative of the broader population, and test for antibodies against the disease pathogen. One example is a serology-based study in Spain , which tested 35,883 households in the non-institutionalized population (i.e., excluding people in hospitals, prisons, convents, nursing homes and other collective residencies) for antibodies against the SARS-CoV-2 spike protein. This study was conducted in late May 2020, when Spain had strict social distancing guidelines and had its steepest rise in cases before the recent resurgence seen throughout the world. The study reported an estimated seroprevalence of 5.0% for the entire Spanish population of 46.9 million. With a death count of 26,920 as of May 11, 2020, the IFR was 1.15% . This describes a broad overview of how a serology study will estimate the IFR of a population.

2.3 How do we determine the accuracy of IFR estimates?

Meta-analyses of seroprevalence studies exhibit a range of values for the IFR. Over time, the portion of the population that is infected can increase and decrease, as well as spread into new groups that vary in vulnerability. This results in a dynamic mortality rate. Many studies have been conducted in different countries, sometimes even multiple in one region. If the goal is to determine what causes variability in the IFR estimates, whether it is the study's methods, location, time or population demographics, then we must be aware of these details for each study. Here we will focus on two meta-analyses.

Meyerowitz-Katz & Merone collected and sorted through articles using data from February to April 2020 and arrived at 13 total estimates (8 modeled estimates and 5 observational estimates). The overall IFR estimate was 0.75% (95% CI: 0.49%–1.01%), and there was no detectable pattern among the locations of the studies, the dates, or types of study. Additionally, they reported a high heterogeneity (I² exceeding 99%) within the data, suggesting the point estimates for IFR used may not be reliable. Meyerowitz-Katz & Merone mention the lack of age-stratified data and the variability of methods across the studies as possible reasons for skewing the data either higher or lower.

Ioannidis also performed a meta-analysis of 36 seroprevalence studies from across the globe published from early April to early July 2020. There are just two articles , used in both the Meyerowitz-Katz & Merone and Ioannidis studies. The Ioannidis meta-analysis extracted the location, recruitment and sampling strategy, dates, sample size, types of antibody (IgG, IgM, IgA), estimated crude seroprevalence and adjusted seroprevalence. Additionally, the paper extracts the reasons for adjusting the seroprevalence to account for the factors that cause uncertainty. Only studies with a sample that approximates the general population and with a size of at least 500 were included.
The paper also corrected the IFR estimates based on the number of antibodies tested for by dividing each estimated IFR by 1.1 for every antibody they did not test for. The seroprevalence estimates varied widely, ranging from 0.222% in Rio Grande do Sul, Brazil to 47% in Brooklyn, New York . IFR estimates converged to a tighter range of 0.02% in Kobe, Japan to 1.63% in Louisiana, USA , excluding the four 0.00% IFR estimates where deaths were insignificant or zero. The median IFR estimate across the 32 locations was 0.27%. With the large range of seroprevalence and IFR estimates, readers and experimentalists require a way to filter for inaccuracies and robustness. The next section of this article will assist by investigating the factors we believe to be causing the uncertainty.
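Since both meta-analyses above hinge on such adjustments, the antibody correction just described is worth restating in code. The sketch below is our own paraphrase of the procedure, not the meta-analysis code itself:

```python
def corrected_ifr(ifr_estimate: float, antibodies_tested: int,
                  total_antibodies: int = 3, factor: float = 1.1) -> float:
    """Divide the IFR by 1.1 for each antibody class (IgG, IgM, IgA) a study
    did not test, since missed antibody classes understate seroprevalence."""
    missed = total_antibodies - antibodies_tested
    return ifr_estimate / (factor ** missed)

# A study reporting IFR = 0.30% that tested only IgG (missing IgM and IgA):
print(f"{corrected_ifr(0.30, antibodies_tested=1):.3f}%")  # -> 0.248%
```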
Determining the cause for inconsistency in fatality rates… Is it geography or serology?

The answer is yes to both. There are innate differences in the population concerning the spreading event, population health and demographics that lead to the wide range of seroprevalence estimates and the slightly smaller, yet significant, range of IFR estimates. If one serology study takes samples from healthy blood donors, one from the general population and another from outpatients at the local hospital, how can we be sure these geographical differences are the sole cause for the difference in the mortality rate of COVID-19? Ioannidis' meta-analysis does well to include each study's methods of recruitment, sample population demographics, test performance and how the virus spread in the study location. With this information, we can properly assess the significance of each estimate and form a more complete picture of the risk involved with the COVID-19 pandemic.

For example, 7 of the 36 studies in the Ioannidis meta-analysis use blood samples taken exclusively from blood donors. Blood donors are often required to be healthy and can exclude those who have had any signs of illness in the past two weeks. This sample is biased towards healthier individuals, who are not representative of the general population, resulting in an underestimation of seroprevalence and an overestimation of the location's IFR. Two-hundred blood donors in Oise, France gave a seroprevalence estimate of 3% while students, siblings, parents, teachers and staff in the same area recorded a seroprevalence of 25.9%. Thus, the sampling methodology can distort IFR estimates. Additionally, 5 of the 36 studies focused on locations with a death count much higher than other locations within their respective countries. Locations with these discrepant numbers of deaths will lead to an overestimation of IFR.

Most studies recognize the faults in their sample population and perform corrections to the data to account for the defect; however, these corrections are conjectures. There is no exact measure of the extent these factors have changed the results. More importantly, the factor(s) accounted for vary between studies. This inconsistency makes conducting a meta-analysis difficult, and it is at the root of the problem of putting IFR estimations into action. It also poses challenges in determining the best parameters to introduce in mathematical models in order to make epidemiological forecasts or investigate outcomes for different interventions.

3.1 What factors limit the accuracy of IFR estimations?

The following sections explore some of the important determinants and how they affect the IFR estimation. For estimations to be interpreted in the correct context and accurately generalized, the environment and method of analysis of the study must be discussed. Few studies, if any, acknowledge and account for the many elements that shape their results. This is understandable, as there are many factors and they come from different angles. The goal of the following sections is to elucidate the degree to which these factors can affect the IFR. By doing so, we hope to make it clear why they are important to consider and report.

3.1.1 How does the IFR vary based on age, comorbidities, and demographics?

It is well known that SARS-CoV-2 has a steep gradient in risk of death when it comes to age, demographics and comorbidities.
More specifically, the mortality risk increases for the elderly (age >65), those with underlying conditions and those of lower socioeconomic status. By now, the vulnerability of the above groups in this pandemic is common knowledge; however, less well known is the degree to which each affects COVID-19 numbers. Below we will discuss each topic and its numbers to help clarify their respective effects on the COVID-19 pandemic.

Evidence of how the IFR can change depending on age is also found in the serology-informed article by Ioannidis . Among the study's lowest IFR estimates, at 0.08%, was Iran , which, despite a seroprevalence of 33%, maintained a low IFR due to its very young population, with only slightly more than 1% above the age of 80. IFR estimates for the <70 age group were lower than 0.1% in all but seven locations (Belgium, Wuhan, Italy, Spain, Connecticut, Louisiana, New York), all seven of which were hotbeds of the virus at the time. There was a median of 0.05% across all locations for the <70 age group, significantly lower than the overall median of 0.27%. Additionally, a serology study in Geneva estimated the overall IFR to be 0.64%, yet the IFR for ages <50 years was <0.01%, for ages 50–64 years was 0.14%, and for ages >65 years was 5.6%. As of December 2020, the Centers for Disease Control and Prevention (CDC) Pandemic Planning Scenarios cite the Hauser et al. study for their best IFR estimates per age group: 0–19 years = 0.003%, 20–49 years = 0.02%, 50–69 years = 0.5%, 70+ years = 5.4%. The observed trends suggest that the mortality risk increases exponentially with age. The study by Ioannidis , which looks at the COVID-19 deaths within eight European countries and the US, confirms this exponential increase in death rate for both males and females. This increase with age can also be seen in Figure 1 of the article by Guilmoto . When the virus finds ways to attack the older population, the number of deaths in elder populations will far surpass the younger, and the resulting overall, age-unadjusted IFR will begin to lose quality.

The significant age-related difference in mortality risk is reinforced by the sizeable portion of COVID-19 deaths coming from long-term care facilities (or nursing homes) relative to the total portion of infections the facilities contribute. In association with the International Long-Term Care Policy Network, Comas-Herrera et al. gathered evidence on long-term care facilities as they relate to COVID-19 from 26 countries where official sources made the data available. After considering the many different approaches each country has taken in defining a COVID-19 death, a COVID-19 long-term care facility death, and long-term care facilities themselves, the study estimates that 46% of all COVID-19 deaths have come from long-term care facility residents, based on data from 21 countries. In the US, long-term care facilities contributed 41% of the total COVID-19 deaths as of late September 2020. This trend is relatively common in the study across countries with more than 5000 total deaths, ranging from 39% of deaths in Germany to 80% in Canada, and anomalies to this trend are found only in countries with less than 1000 total deaths. The average length of stay in nursing homes is 2 years, and people who die in nursing homes die in a median of 5 months .
This suggests that the deaths of people in nursing homes strongly affect COVID-19 fatality figures. In the meta-analysis by Ioannidis , three studies taking place in New York , , show high overall IFR values of 0.4% , 0.68% , and 0.65% . A possible explanation for this high mortality risk would be the decision by the New York governor to allow excess COVID-19 patients to find care in nursing homes. It is not unexpected that people in nursing homes, who are there typically due to poor health conditions, would be more susceptible to this virus. Given their major contribution to the total amount of COVID-19 deaths in most countries, their under-representation throughout the majority of serology-informed studies is cause for concern. Age-specific calculations of the IFR can minimize the effect of neglecting nursing homes, but if seroprevalence in this institutionalized population is higher than in the general population, it could lead to overestimation of the IFR. One large study already mentioned in this paper, the Spain study , does not include this group of individuals. That study addresses many of the factors we discuss in this review, yet its inability to include institutionalized individuals may affect the accuracy of its results more than initially perceived.

Underlying health conditions, such as cardiovascular disease, hypertension, diabetes, chronic obstructive pulmonary disease, severe asthma, kidney failure, severe liver disease, immunodeficiency and malignancy, have been linked to an increased fatality risk when infected with COVID-19. These comorbidities create another at-risk group, in addition to the elderly, that must be treated with caution. The comorbidity factor contributes significantly to the interpretation of deaths that occur in the <65 age group. The early-pandemic age-stratified analysis of COVID-19 mortality risk by Ioannidis , covering 11 European countries, Canada, Mexico, India and 13 US states, showed a small fraction of total deaths attributable to non-elderly people with no underlying conditions. The percent of total COVID-19 deaths in people below the age of 65 ranged from 4.5% to 11.2% in European countries and Canada, and from 8.3% to 22.7% in US locations. In Mexico and India, however, non-elderly individuals constitute the majority of the population. A noteworthy result regarding the impact of underlying diseases is that the study showed the proportion of total COVID-19 deaths linked to non-elderly people without underlying conditions ranged from just 0.65% to 3.6%, where data were available (France, Italy, Netherlands, Sweden, Georgia, and New York City). Additionally, these numbers were calculated while considering only cardiovascular disease, hypertension, diabetes and pulmonary disease as comorbidities. While these diseases contribute to the bulk of the <65-years-old comorbidity population, studies still leave out other underlying diseases linked to COVID-19 with unknown contributions to this comorbidity population. Many countries and states vary in their definitions of underlying conditions as they pertain to COVID-19. Partitioning COVID-19 data according to the major comorbidities could prove beneficial to the analysis of the reported data, given the significant number of deaths that group contributes in the <65 age group.

SARS-CoV-2 also disproportionately affects people by socioeconomic status, most notably in urban areas.
Yet, the following paragraph will instead evaluate the mortality rate through race and ethnicity rather than socioeconomic status. The reason for this is not to suggest there is a natural vulnerability to the virus based on race. Rather, it is because it is well established in the US that minority groups are disproportionately represented in lower socioeconomic statuses . Additionally, socioeconomic status is not reported as often as race/ethnicity in most studies. Therefore, the association between socioeconomic status and race can be useful when considered appropriately.

The antibody survey conducted by the New York government in late April provided seroprevalence estimates of 8.9–9.1% in White populations, 22.5–32.0% in Latino/Hispanic populations, 16.9–22% in Black populations, and 11.7–14.6% in Asian populations. The APM research lab has independently compiled up-to-date data regarding COVID-19 deaths by race across the US and has identified that Black people, representing 12.4% of the population, have suffered 19.9% of reported COVID-19 deaths. Additionally, compared with the White population, the latest US age-adjusted COVID-19 mortality rates are 3.0 times as high for Black populations, 3.2 times for Indigenous people, 3.0 times for Latino populations, and 2.3 times for Pacific Islanders. If there is a direct association between these demographic groups and socioeconomic status in a population, as is the case in major cities in the US, then studies can use this demographic measure to assess how these factors affect the mortality rate.

Social factors affect disadvantaged groups and low-income countries, which contributes to anomalies and inaccurate interpretation of the data. With higher rates of underlying conditions, less access to healthcare and more frontline jobs, among other factors, people of lower socioeconomic status are another group to carefully consider in the context of this pandemic. More awareness of this issue can help with public health measures like increased availability of antibody and RT-PCR testing, increased awareness of disease symptoms, and stricter guidelines on personal protective equipment to help control the spread and mortality of this disease within these groups.

3.1.2 Is the sample population representative of the general population?

First and foremost are the uncertainties with the sample population. While some serosurveys are deliberately unrepresentative of the larger population, such as those using blood donors as samples, others that aim for mixed, random sampling within a population can still have variability. When recruiting individuals, certain subpopulations where COVID-19 is particularly widespread, such as nursing homes, disadvantaged communities, people experiencing homelessness and people in prisons, may be under-represented in the studies. The serosurveys do not exclude these groups; rather, their method of recruitment inherently makes it difficult for these groups to participate. For example, many studies were household-based, recruited from outpatient clinics, or contacted participants via Facebook . Institutionalized populations will have a more difficult time accessing these studies, as will disadvantaged communities who do not have regular access to healthcare or technology. Recruiting fewer people from these subgroups may underestimate seroprevalence and overestimate IFR.
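Both the age gradient of Section 3.1.1 and the sampling composition issues above feed directly into any headline IFR. As a toy illustration, entirely our own and using made-up infection age mixes, the following snippet shows how the same age-specific IFRs yield very different overall IFRs depending on who gets infected (or sampled):

```python
# Age-specific IFR point estimates quoted above (CDC Pandemic Planning
# Scenarios, December 2020), expressed as fractions.
IFR_BY_AGE = {"0-19": 0.00003, "20-49": 0.0002, "50-69": 0.005, "70+": 0.054}

def overall_ifr(infection_shares: dict) -> float:
    """Population IFR as the infection-weighted average of age-specific IFRs."""
    assert abs(sum(infection_shares.values()) - 1.0) < 1e-9
    return sum(IFR_BY_AGE[a] * s for a, s in infection_shares.items())

young_skew = {"0-19": 0.30, "20-49": 0.45, "50-69": 0.20, "70+": 0.05}
old_skew   = {"0-19": 0.10, "20-49": 0.35, "50-69": 0.35, "70+": 0.20}
print(f"{overall_ifr(young_skew):.2%}")  # -> ~0.38%
print(f"{overall_ifr(old_skew):.2%}")    # -> ~1.26%
```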
3.1.4 What is the accuracy of SARS-CoV-2 testing?

Two types of tests are used to detect antibodies in individuals: the lateral flow immunoassay (LFIA) device and the enzyme-linked immunosorbent assay (ELISA). Each analyzes a serum sample for IgG and IgM antibodies against a particular part of the virus, in this case the SARS-CoV-2 spike protein. As mentioned above, seroconversion, the creation of anti-spike-protein antibodies, can take one to three weeks; a serology-based study therefore measures the seroprevalence of the population as it stood roughly two weeks before the study date. LFIA devices are used as point-of-care tests, providing results in roughly 10 minutes, while an ELISA can take hours and requires lab equipment; the trade-off is in quality, as the ELISA is typically more sensitive and specific. The sensitivity of a test is the likelihood that it returns a true positive result; the specificity is the likelihood that it returns a true negative. The sensitivity of LFIA and ELISA devices is assessed by calculating the percentage of positive results against known positive samples confirmed by the RT-PCR test, the gold standard; 100% in this case means perfectly sensitive. The specificity is usually tested against pre-SARS-CoV-2-outbreak samples, where 100% negativity of tested samples means perfectly specific. To assess the quality of each test, a report from the National COVID Scientific Advisory Panel tested for SARS-CoV-2 IgM and IgG antibodies using an ELISA and 9 different LFIA devices. The ELISA detected IgG in 34/40 PCR-positive samples, a sensitivity of 85% (95% CI 70%–94%), where all 6 false negatives came from samples taken within nine days of symptom onset. It detected IgG in 0/50 pre-pandemic controls, a specificity of 100%, and in 31 of 31 positive samples taken more than 10 days after symptom onset, a sensitivity of 100%. IgM sensitivity was lower, at 70%, and all IgG false negatives were also IgM false negatives.
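Once sensitivity and specificity have been estimated this way, a serosurvey can correct its raw positivity rate for test error. The sketch below applies the standard Rogan–Gladen adjustment; the input values are illustrative, not figures from the Panel's report:

```python
def rogan_gladen(raw_positivity, sensitivity, specificity):
    """Correct apparent seroprevalence for imperfect test accuracy.

    true_prevalence = (raw + specificity - 1) / (sensitivity + specificity - 1)
    """
    corrected = (raw_positivity + specificity - 1) / (sensitivity + specificity - 1)
    return min(max(corrected, 0.0), 1.0)  # clamp to a valid proportion

# Illustrative: 6% raw positivity on an assay with 85% sensitivity
# and 99% specificity.
print(f"{rogan_gladen(0.06, 0.85, 0.99):.3f}")  # ~0.060
```

The correction matters most at low prevalence, where even a 1% false-positive rate can account for a large share of the raw positives.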
The Advisory Panel results above confirm that ELISA accuracy improves when detecting IgG antibodies in samples taken more than 10 days after symptom onset. The ELISA optical density (OD) ratio cut-off can also be tuned according to a study's preference for sensitivity or specificity. LFIA devices, on the other hand, ranged from 55% to 70% in sensitivity and 95% to 100% in specificity. Manufacturers report higher sensitivities for LFIA devices, but the seroepidemiological study in Spain performed its own validation of the LFIA device it used, reporting an IgG sensitivity of 82.1%, an IgM sensitivity of 69.6% and specificities of 100% and 99.0%, respectively. Exact sensitivities of LFIA devices are therefore variable, but IgG antibodies appear more reliable than IgM. The lower sensitivity of LFIA devices may result in unreliable and insufficient screening of SARS-CoV-2 infection. The National COVID Scientific Advisory Panel study considers the best-case scenario for an LFIA test to be 70% sensitivity and 98% specificity. Even if sensitivity were to improve without compromising specificity, 1000 tests would still yield roughly 19 false positives. In a population with 5% seroprevalence, this would mean roughly 35% of positive results are wrong. As seroprevalence increases to 20%, 10% of positive results would be wrong, and at 50% seroprevalence, 3% would be wrong. This is concerning given the range of seroprevalence estimates in the meta-analysis by Ioannidis, where only 9 of 36 studies recorded a seroprevalence ≥ 10% and only 4 were ≥ 15%. Despite this apparent flaw in LFIA devices, the point-of-care and ELISA tests used in the Spain study still recorded similar seroprevalence estimates, 5.0% (95% CI 4.7–5.4) and 4.6% (4.3–5.0), respectively. This suggests that for large serology-informed studies, such as the one in Spain, the LFIA test can be useful, as it allows greater uptake, lower cost and easier implementation.
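The arithmetic behind these percentages is easy to verify. The sketch below reproduces the best-case LFIA figures above (70% sensitivity, 98% specificity) and computes what share of positive results would be false at each level of seroprevalence:

```python
def false_positive_share(prevalence, sensitivity=0.70, specificity=0.98,
                         n_tests=1000):
    """Expected false positives per n_tests and their share of all positives."""
    true_pos = n_tests * prevalence * sensitivity
    false_pos = n_tests * (1 - prevalence) * (1 - specificity)
    return false_pos, false_pos / (false_pos + true_pos)

for prev in (0.05, 0.20, 0.50):
    fp, share = false_positive_share(prev)
    print(f"seroprevalence {prev:.0%}: ~{fp:.0f} false positives, "
          f"{share:.0%} of positives wrong")
# seroprevalence 5%: ~19 false positives, 35% of positives wrong
# seroprevalence 20%: ~16 false positives, 10% of positives wrong
# seroprevalence 50%: ~10 false positives, 3% of positives wrong
```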
There is more to call into question when testing individuals for SARS-CoV-2 infection with the reverse transcription–polymerase chain reaction (RT-PCR) swab test. This test takes a swab sample of the subject's upper respiratory tract and, if SARS-CoV-2 RNA is present, uses the RT-PCR technique to amplify the RNA to detectable levels. While false-positive RT-PCR results are rare outside instances of cross-contamination, false negatives can occur due to poor quality or timing of the test. A study on the temporal dynamics of viral shedding and transmissibility of COVID-19 showed that viral loads in the upper respiratory tract peak at and soon after symptom onset, then decline quickly within 7 days until they reach the detection limit at around 21 days. Infectiousness, however, may decline significantly after 8–10 days of symptoms, as live virus could no longer be cultured in a study by Wölfel et al. There may therefore be a significant window in which individuals test positive by RT-PCR despite no longer being infectious to others.

3.1.5 What is known about the IgG and IgM antibody kinetics in humans?

There may also be significant differences in post-infection antibody kinetics between asymptomatic, mild and severe infections. A clinical and immunological assessment of 37 asymptomatic and 37 symptomatic SARS-CoV-2 infections found significant differences in IgM detection: 62.2% of asymptomatic and 78.4% of symptomatic individuals tested positive. Additionally, whereas 81.1% of asymptomatic and 83.8% of symptomatic individuals tested positive for IgG 3–4 weeks after exposure, 40.0% of asymptomatic and 12.9% of symptomatic individuals became seronegative for IgG in the early convalescent phase, 8 weeks after hospital discharge. To maximize the accuracy of serosurveys, conclusions drawn from the data must therefore be associated with the period of time they most accurately represent.

3.1.6 How are COVID-19 deaths defined?

Across nations and states, the answer to the question "What is a COVID-19 death and what isn't?" is serious and important, but also inconsistent. In general, there are three methods of defining and quantifying COVID-19 deaths. The first records a death as due to COVID-19 only for those who test positive, either before or after death. This method could be implemented uniformly if every person could be tested; however, many countries and states are unable to do so. This leaves uncounted both deaths from COVID-19-driven exacerbation of chronic conditions and deaths missed for lack of testing. The method can also overlook people with atypical symptoms and deaths indirectly linked to the pandemic, such as those caused by limited access to health care services due to overcrowded hospitals. Conversely, it can incorrectly count those dying of unrelated causes, such as a car crash, after testing positive. The second method counts deaths both of people who test positive and of those who are not tested but are suspected of having had COVID-19. Several countries, such as Belgium, Canada, England, France, Ireland, Scotland, and some regions of Spain, have used this approach. It carries a risk of incorrectly attributing deaths to COVID-19, but it can provide timely data on the scale of the pandemic's mortality without requiring COVID-19 tests for every hospitalized individual. Unsurprisingly, these countries report higher proportions of COVID-19 deaths. The third method quantifies COVID-19 deaths by measuring excess deaths. This method is best for capturing deaths both directly and indirectly associated with COVID-19, and thus the full effect the pandemic has had on the public's health. It works by counting deaths in excess of the expected number, based on the past five years of mortality data. This method will be reliable, but not for months or possibly years, due to the time it takes to officially process death certificates. Excess deaths are also subject to confounding factors, such as a bad flu season, fewer driving accidents or decreased utilization of healthcare during the pandemic. It is important to acknowledge these different ways of recording COVID-19 deaths in order to recognize the possible underestimation of IFR values by the first method and overestimation by the second. The third method will inherently underestimate recent mortality while deaths are still being processed and documented. Over time, the overall impact of the pandemic on deaths can be evaluated, but the many confounding factors in play must be considered when estimating the mortality rate of the disease itself.
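To make the excess-deaths calculation concrete, the sketch below compares observed all-cause deaths against a baseline built from prior years; every number here is hypothetical:

```python
# Hypothetical weekly all-cause deaths for one region.
baseline = [1000, 1010, 990, 1005]      # mean of the previous five years, per week
observed_2020 = [1150, 1300, 1420, 1380]

weekly_excess = [obs - base for obs, base in zip(observed_2020, baseline)]
print(weekly_excess)       # [150, 290, 430, 375]
print(sum(weekly_excess))  # 1245 excess deaths over the four weeks
```

The total captures direct and indirect pandemic deaths together, so the confounders listed above (a mild flu season, fewer traffic deaths) land in the same number and must be disentangled separately.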
3.1.7 What is the state of hospital COVID-19 cases, deaths and patient data?

The COVID-19-Associated Hospitalization Surveillance Network (COVID-NET) is a population-based surveillance system run by the CDC that collects data on laboratory-confirmed COVID-19-associated hospitalizations among children and adults through a network of over 250 acute-care hospitals in 14 states, covering 10% of the entire US population. The system records each case's age group, sex, ethnicity and underlying health conditions. Cases are identified in COVID-NET if they test positive for SARS-CoV-2 and are hospitalized within 14 days of the positive test, and the data is collected using a standardized method of reporting by trained surveillance officers. This database therefore has the potential to present patients in a complete context. It is a prime example of how hospital data can inform the public about the risk of COVID-19 in their area, providing both specific and generalized data points: for example, the weekly hospitalization rate by age, or the proportion of cases resulting in death or release by race/ethnicity. COVID-NET also shows that 89.3% of all hospitalizations are in patients with some underlying health condition, the most common being hypertension (58.9%). While promising, this data has limitations. First, the network was able to perform a detailed analysis of comorbidity and ethnicity only for hospitalizations in March, due to the large amount of time needed to process such data. There were 1,482 hospitalizations in the system for that month, and just 180 (12.1%) contained data regarding comorbidities. Moreover, the only cases reported on the COVID-NET surveillance website are those where the healthcare provider specifically ordered laboratory testing for SARS-CoV-2, leading to under-ascertainment of COVID-19 cases, since each provider practices differently. All results are also provisional, as each chart must be reviewed once the patient has a discharge disposition. This inefficient transfer of information limits the website's ability to present a more holistic and accurate picture of the COVID-19 pandemic throughout the country. The difficulty of communicating critical data, like ethnicity/race and underlying conditions, is closely linked to the main issue addressed in this paper: providing context around COVID-19 cases. A system like COVID-NET needs to be established much more widely throughout the US. It is critical that the flow of information from hospitals to organizations like the CDC and Health & Human Services is streamlined, so that policymakers and the public are aware of the situation in their local area. Despite these issues, the COVID-NET interactive website continues to publish current, weekly hospitalization data stratified by age, which can still be very helpful for those making decisions based on hospitalization data.
How does the IFR vary based on age, comorbidities, and demographics? It is well known that SARS-CoV-2 has a steep gradient in risk of death when it comes to age, demographics and comorbidities. More specifically, the mortality risk increases for the elderly (age >65), those with underlying conditions and those of lower socioeconomic status. By now, the vulnerability of the above groups in this pandemic is common knowledge; however, less well-known is the degree to which each affects COVID-19 numbers. Below we will discuss each topic and their numbers to help clarify their respective effects on the COVID-19 pandemic. Evidence of how the IFR can change depending on age is also found in the serology-informed article by Ioannidis . Among the study’s lowest IFR estimates, at 0.08%, was Iran , which despite a seroprevalence of 33%, maintained a low IFR due to its very young population, with only slightly more than 1% above the age of 80. IFR estimates for the <70 age group were lower than 0.1% in all but seven locations (Belgium, Wuhan, Italy, Spain, Connecticut, Louisiana, New York), where all seven were hotbed cities of the virus at the time. There was a median of 0.05% across all locations for the <70 age group, significantly lower than the overall median of 0.27%. Additionally, a serology study in Geneva estimated the overall IFR to be 0.64%, yet for ages <50 years was <0.01%, for ages 50–64 years was 0.14%, and for ages >65 years was 5.6%. As of December 2020, the Centers for Disease Control and Prevention (CDC) Pandemic Planning Scenarios sources the Hauser et al. study as its best IFR estimates per age group: 0–19 years = 0.003%, 20–49 years = 0.02%, 50–69 years = 0.5%, 70+ years = 5.4%. The observed trends suggest that the mortality risk increases exponentially with age. The study by Ioannidis looks at the COVID-19 deaths within eight European countries and the US confirms this exponential increase in death rate for both males and females. This increase with age can also be seen in Figure 1 of the article by Guilmoto . When the virus finds ways to attack the older population, the number of deaths in elder populations will far surpass the younger and the resulting overall, age-unadjusted IFR will begin to lose quality. The significant age-related difference in mortality risk is reinforced by the sizeable portion of COVID-19 deaths coming from long-term care facilities (or nursing homes) relative to the total portion of infections the facilities contribute. In association with the International Long-Term Care Policy Network, Comas-Herrera et al. gathered evidence on long-term care facilities as they relate to COVID-19 from 26 countries where official sources made the data available. After considering the many different methods each country has taken in defining a COVID-19 death, a COVID-19 long-term care facility death and long-term care facilities themselves, the study estimates that 46% of all COVID-19 deaths have come from long-term care facilities residents based on data from 21 countries. In the US, long-term care facilities contributed to 41% of the total COVID-19 deaths as of late September 2020. This trend is relatively common in the study across countries with more than 5000 total deaths that range from 39% of deaths in Germany to 80% in Canada, and anomalies to this trend are found only in countries with less than 1000 total deaths. The disproportionate contribution of long-term care facilities to COVID-19 deaths is reasonable considering the fragility of these residents’ lives. 
The average length of stay in nursing homes is 2 years and people who die in nursing homes die in a median of 5 months . This suggests that the deaths of people in nursing homes largely affects the COVID-19 fatality. In the meta-analysis by Ioannidis , three studies taking place in New York , , show high overall IFR values of 0.4% , 0.68% , and 0.65% . A possible explanation for this high mortality risk would be the decision by the New York governor to allow excess COVID-19 patients to find care in nursing homes. It is not unexpected that people in nursing homes, who are there typically due to poor health conditions, would be more susceptible to this virus. Given their major contribution to the total amount of COVID-19 deaths in most countries, their under-representation throughout the majority of serology-informed studies is cause for concern. Age-specific calculations of the IFR can minimize the effect of neglecting nursing homes, but if seroprevalence in this institutionalized population is higher than in the general population, it could lead to overestimation of the IFR. Already mentioned in this paper was a large study that does not include this group of individuals, the Spain study . This study addresses many of the factors we discuss in this review, yet its inability to include institutionalized individuals may be affecting the accuracy of their results more than they initially perceive. Underlying health conditions, such as cardiovascular disease, hypertension, diabetes, chronic obstructive pulmonary disease, severe asthma, kidney failure, severe liver disease, immunodeficiency and malignancy have been linked to an increased fatality risk when infected with COVID-19. These comorbidities create another at-risk group, in addition to the elderly, that must be treated with caution. The comorbidity factor contributes significantly to the interpretation of deaths that occur in the <65 age group. The age-stratified analysis on COVID-19 mortality risk by Ioannidis in the early pandemic on 11 European countries, Canada, Mexico, India and 13 US states showed a small fraction of total deaths attributable to non-elderly people with no underlying conditions. A range of 4.5% to 11.2%, in European countries and Canada, and 8.3% to 22.7%, in US locations, was identified as the percent of total COVID-19 deaths in people below the age of 65. In Mexico and India, however, non-elderly individuals constitute the majority of the population. A noteworthy result regarding the impact of underlying diseases is that the study showed the proportion of total COVID-19 deaths linked to non-elderly people without underlying conditions ranged from just 0.65% to 3.6%, where data was available (France, Italy, Netherlands, Sweden, Georgia, and New York City). Additionally, these numbers were calculated while considering only cardiovascular disease, hypertension, diabetes and pulmonary disease as comorbidities. While these diseases contribute to the bulk of the <65 years old comorbidity population, studies still leave out other underlying diseases linked to COVID-19 with unknown contributions to this comorbidity population. Many countries and states vary in their definitions of underlying conditions as it pertains to COVID-19. Partitioning COVID-19 data according to the major comorbidities could prove beneficial to the analysis of the reported data, given the significant number of deaths that group contributes to the deaths in the <65 age group. 
SARS-CoV-2 also disproportionately affects people by socioeconomic status, most notably in urban areas. Yet, the following paragraph will instead evaluate the mortality rate through race and ethnicity rather than socioeconomic status. The reason for this is not to suggest there is a natural vulnerability to the virus based on race. Rather, the reason is because it is well-established in the US that minority groups are disproportionately represented in lower socioeconomic statuses . Additionally, socioeconomic status is not reported as often as race/ethnicity in most studies. Therefore, the association between socioeconomic status and race can be useful when considered appropriately. The antibody survey conducted by the New York government in late April provided seroprevalence estimates of 8.9–9.1% in White populations, 22.5–32.0% in Latino/Hispanic populations, 16.9%–22% in Black populations, and 11.7–14.6% in Asian populations. The APM research lab has independently compiled up-to-date data regarding COVID-19 deaths by race across the US and has identified that Black people, representing 12.4% of the population, have suffered 19.9% of reported COVID-19 deaths. Additionally, compared to the White population, the latest U.S. age-adjusted COVID-19 mortality rate for the Black populations are 3.0 times as high, the Indigenous people are 3.2 times, the Latino populations are 3.0 times, and the Pacific Islanders are 2.3 times. If there is a direct association between these demographic groups and socioeconomic status in a population, as is the case in major cities in the US, then studies can use this demographic measure to assess how these factors affect the mortality rate. Social factors are affecting disadvantaged groups and low-income countries which contributes to anomalies and inaccurate interpretation of the data. With higher rates of underlying conditions, less access to healthcare and more frontline jobs, among other factors, people of lower socioeconomic status are another group to carefully consider in the context of this pandemic. More awareness of this issue can help with public health measures like an increased availability of antibody and RT-PCR testing, increased awareness of disease symptoms, and more strict guidelines on personal protective equipment to help control the spread and mortality of this disease within these groups.
Is the sample population representative of the general population? First and foremost are the uncertainties with the sample population. While some serosurveys are deliberately unrepresentative of the larger population, such as those using blood donors as samples, others that aim for mixed, random sampling within a population can still have variability. When recruiting individuals, certain subpopulations where COVID-19 is particularly widespread, such as among nursing homes, disadvantaged communities, people experiencing homelessness and people in prisons may be under-represented in the studies. The serosurveys do not exclude these groups, rather their method of recruitment inherently makes it difficult for these groups to participate. For example, many studies were household-based, recruiting from outpatient clinics, or contacting participants via Facebook . Institutionalized populations will have a more difficult time accessing these studies as well as disadvantaged communities who do not have regular access to healthcare or technology. Recruiting fewer people from these subgroups may underestimate seroprevalence and overestimate IFR.
How do information delays affect the timestamp of COVID-19 data? Many delays occur over the course of SARS-CoV-2 exposure and infection. Awareness of each delay can ensure that each documented case, seroconversion, hospitalization and death is properly associated with the date it represents. Overall, studies must account for the delay between exposure and symptoms (incubation period), symptom onset and documented infection, exposure and seroconversion (formation of antibodies), symptom onset and death, and finally, death and reporting. The delay between infection and seroconversion is roughly 1 to 3 weeks . The incubation period has a median time of 4–5 days, where 97.5% of people with COVID-19 who show symptoms will do so before 11.5 days after infection . The time between symptom onset and documented infection is roughly 5 days . The delay between symptom onset to death usually falls in the range of 13 to 19 days . The delay between death and reporting is roughly 1 to 8 weeks, where roughly <25% of deaths are reported within the first few weeks and generally 75% are reported by 8 weeks . In summary, it takes roughly 1.5 to 2 weeks for a rise in infections to reflect in documented cases, one to three weeks for a population’s antibodies to represent the seroprevalence, and one month or more for reported deaths to reflect the mortality of past cases. These delays can cause incorrect associations between values of seroprevalence, cases and deaths if not appropriately considered.
What is the accuracy of SARS-CoV-2 testing? There are two types of tests used to test the presence of antibodies in individuals, the lateral flow immunoassay (LFIA) device and the enzyme-linked immunoassay (ELISA). Each test has a different way of analyzing a serum sample for IgG and IgM antibodies against a certain part of a virus, in this case, the SARS-CoV-2 spike protein. As mentioned above, seroconversion, or creating anti-spike protein antibodies, can take one to three weeks. Therefore, a serology-based study will calculate the seroprevalence of the population roughly two weeks before the study date. LFIA devices are used as point-of-care tests, providing results in a matter of 10 min while the ELISA can take hours and requires lab equipment, yet there is a trade-off in quality as the ELISA is typically more sensitive and specific. The sensitivity of a test refers to the likeliness of giving a true positive result. The specificity of a test refers to the likeliness of giving a true negative. The sensitivity of the LFIA and ELISA devices is assessed by calculating the percent positive results against known positive samples confirmed by the RT-PCR test, the gold-standard. 100% in this case is considered perfectly sensitive. The specificity is usually tested against pre-SARS-CoV-2 outbreak samples where 100% negativity of tested samples means perfectly specific. To assess the quality of each test, a report from the National COVID Scientific Advisory Panel tested for SARS-CoV-2 IgM and IgG antibodies using ELISA and 9 different LFIA devices. The ELISA detected IgG in 34/40 PCR-positive samples, a sensitivity of 85% (95%CI 70%–94%), where all 6 false negatives were from samples taken within at least nine days from symptom onset. It detected IgG in 0/50 pre-pandemic controls, a specificity of 100%, and in 31 of 31 positive samples taken greater than 10 days after symptom onset, a sensitivity of 100%. IgM sensitivity was lower at 70%, and all IgG false negatives were also IgM false negatives. This confirms that the accuracy of ELISA tests improves when detecting IgG antibodies in samples taken greater than 10 days after symptom onset. The ELISA OD ratio can often be refined according to the study’s preferences to prioritize either sensitivity or specificity. LFIA devices, on the other hand, ranged from 55%–70% in sensitivity and 95%–100% in specificity. Higher sensitivities for LFIA devices are reported by manufacturers, but the seroepidemiological study in Spain performed their own validation of the LFIA device they used. They reported an IgG sensitivity of 82.1%, an IgM sensitivity of 69.6% and specificities of 100% and 99.0%, respectively. Therefore, exact sensitivities of LFIA devices are variable, but IgG antibodies seem to be more reliable than IgM. The lower sensitivity of LFIA devices may result in unreliable and insufficient screening of SARS-CoV-2 infection. The National COVID Scientific Advisory Panel study considers the best-case scenario for an LFIA test to be 70% sensitivity and 98% specificity. Even if the sensitivity of the device were to improve without compromising the specificity, after 1000 tests there would be roughly 19 false positive documented infections. In a population of 5% seroprevalence, this would mean 35% of the tests are wrong. As the seroprevalence increases to 20%, 10% of results would be wrong, and at 50% seroprevalence, 3% would be wrong. 
This is concerning given the range of seroprevalence estimates in the meta-analysis study by Ioannidis , where only 9 of 36 studies recorded a seroprevalence ≥ 10% and only 4 were ≥ 15%. Despite this apparent flaw in the LFIA devices, the point-of-care and ELISA tests used in the Spain study still recorded similar seroprevalence estimates, 5.0% (95% CI 4.7–5.4) and 4.6% (4.3–5.0), respectively. This suggests that for large serology-informed studies, such as the one in Spain, the LFIA test could be useful as it makes for greater uptake, lower cost and easier implementation. There is more to call into question regarding testing individuals for SARS-CoV-2 infection using the reverse transcriptase–polymerase chain reaction (RT-PCR) swab test. This test uses swabs to take a sample of the subject’s upper respiratory tract and, if SARS-CoV-2 RNA is present, will use the RT-PCR technique to replicate the RNA to detectable levels. While it would be rare to see a false-positive RT-PCR test excluding instances of cross-contamination, false-negatives can occur due to poor quality or timing of the test. A study on the temporal dynamics of viral shedding and transmissibility of COVID-19 showed that viral loads in the upper respiratory tract peak at and soon after symptom onset, then decline quickly within 7 days until they reach the detection limit at around 21 days. Infectiousness, however, may decline significantly after 8–10 days of symptoms, as live virus could no longer be cultured in a study by Wölfel, et al. . Therefore, there may be a significant amount of time where individuals test positive for RT-PCR tests despite no longer being infectious to others.
What is known about the IgG and IgM antibody kinetics in Humans? There may also be significant differences in post-infection antibody kinetics between asymptomatic, mild and severe infections. In a clinical and immunological assessment of 37 asymptomatic and 37 symptomatic SARS-CoV-2 infections , the study found significant differences in IgM detection, where 62.2% asymptomatic were positive and 78.4% of symptomatic individuals were positive. Additionally, whereas 81.1% and 83.8% of asymptomatic and symptomatic individuals, respectively, tested positive for IgG 3–4 weeks after exposure, only 40.0% of asymptomatic and only 12.9% of symptomatic individuals became seronegative for IgG in the early convalescent phase, 8 weeks after being discharged from the hospital. Therefore, to maximize the accuracy of serosurveys, conclusions made from the data must be associated with the period of time they most accurately represent.
How are COVID-19 deaths defined? Across nations and states, the answer to the question “What is a COVID-19 death and what isn’t?” is serious and important, but also inconsistent. In general, there are three methods of defining and quantifying COVID-19 deaths. First is the method of recording a death as due to COVID-19 only for those who test positive, either before or after death. This method could be uniformly implemented if every person could get tested, however, there are many countries and states that are unable to do so. This results in deaths from exacerbation of chronic conditions due to COVID-19 and deaths not counted due to lack of testing. Therefore, this method can miss people with atypical symptoms and deaths not linked to the pandemic, such as limited access to health care services due to overcrowded hospitals. It could also incorrectly count those dying from unrelated causes, such as a car crash, after testing positive. Second is the method of counting deaths of people who test positive and those who are not tested but suspected of having COVID-19. Several countries, such as Belgium, Canada, England, France, Ireland, Scotland, and some regions of Spain, have used this approach . With this method comes a risk of incorrectly associated deaths to COVID-19, but it may help in providing timely data as to the scale of the pandemic’s mortality without requiring COVID-19 tests for every hospitalized individual. Unsurprisingly, these countries report higher proportions of COVID-19 deaths . The third method of quantifying COVID-19 deaths is by measuring excess deaths. This method is best for quantifying the number of deaths both directly and indirectly associated with COVID-19, capturing the full effect the pandemic has had on the public’s health. This method works by comparing the total amount of deaths that are over the expected number of deaths based on the past five years. This method will be reliable, but not for months or possibly years due to the time it takes to officially process death certificates. There may also be variability in excess deaths caused by confounding factors, such as a bad flu season, less driving accidents or decreased utilization of healthcare during the pandemic. It is important to acknowledge the different ways that COVID-19 deaths are recorded to recognize the possible underestimation, by the first method, and overestimation, by the second method, of IFR values. The third method will inherently underestimate the recent mortality rate, as deaths are being processed and documented. Over time, the overall impact of the pandemic on deaths can be evaluated, yet one will have to consider the many confounding factors in play to estimate the mortality rate of the disease itself.
What is the state of hospital COVID-19 cases, deaths and patient data? The COVID-19-Associated Hospitalization Surveillance Network (COVID-NET) is a population-based surveillance system run by the CDC that collects data on laboratory-confirmed COVID-19-associated hospitalizations among children and adults through a network of over 250 acute-care hospitals in 14 states, covering 10% of the entire US population. This surveillance system acquires information about each case’s age group, sex, ethnicity and underlying health conditions. Cases are identified in COVID-NET if they test positive for SARS-CoV-2 and are hospitalized within 14 days of the positive test, and the data are collected using a standardized method of reporting by trained surveillance officers. Therefore, this database has the potential to present patients in a complete context. It is a prime example of how hospital data can be used to inform the public on the risk of COVID-19 in their area, providing both specific and aggregated data points. For example, it can give the weekly hospitalization rate by age and the proportion of cases resulting in death or release by race/ethnicity. COVID-NET also shows that 89.3% of all hospitalizations are in patients with some underlying health condition, the most common being hypertension (58.9%) . While promising, there are limitations to the application of these data. First, the network was able to perform a detailed analysis of comorbidity and ethnicity only for hospitalizations in March, owing to the large amount of time needed to process these data. There were 1,482 hospitalizations in the system for that month, and just 180 (12.1%) contained data regarding comorbidities. Moreover, cases appear in the COVID-NET surveillance system only when the healthcare provider specifically called for laboratory testing for SARS-CoV-2, leading to under-ascertainment of COVID-19 cases because each provider practices differently. Finally, all results are provisional, as each chart must be reviewed once the patient has a discharge disposition. This inefficient transfer of information limits the website’s ability to present a more holistic and true evaluation of the COVID-19 pandemic throughout the country. The difficulty of communicating critical data, like ethnicity/race and underlying conditions, is closely linked to the main issue addressed in this paper: providing context around COVID-19 cases. A system like COVID-NET needs to be established much more widely throughout the US. It is critical that the flow of information from hospitals to organizations, like the CDC and Health & Human Services, is streamlined so that policymakers and the public are aware of the situation in their local area. Despite these issues, the COVID-NET interactive website continues to publish current, weekly hospitalization data stratified by age, which can still be very helpful for those looking to make decisions based on hospitalization data.
Using metrology principles for reporting epidemiological parameters In this review we looked at the variety of factors affecting COVID-19 fatality rate estimates. To improve our understanding, modeling and decision-making regarding the COVID-19 pandemic, or any other pandemic, epidemiological studies require standardized data reporting. It is essential to develop a definition of the minimum information (metadata) needed to correctly describe fatality rates, as well as all other critical epidemiological parameters. There are many factors, associated specifically with the COVID-19 fatality rate and more generally with seroepidemiological studies, that must be considered for a proper contextual understanding of published data. While these factors and limitations are well known throughout the epidemiological field, there is a habit of not including them in published work. By including this metadata, epidemiologists will better understand the provenance of parameters, and how the results of one study in a specific setting can be generalized and applied more broadly to other situations. It will allow public health officials to make more substantiated and knowledgeable decisions. At the same time, it will improve communication between epidemiologists investigating diseases and possibly reveal novel insights about previously unexplainable differences between models and studies. This manuscript aims to spark a conversation about how to create standardized guidelines for reporting epidemiological parameters in the literature. We believe this can be accomplished by applying metrology principles, which help experimentalists thoroughly dissect each aspect of their study to find where uncertainties can lie. In turn, this dissection not only leads to increased awareness of these factors and limitations but can help people understand why they are so critical to include. 4.1 The COVID-19 pandemic management showcases the urgent need for standardization This standardization dilemma has also manifested in the disparate handling of COVID-19 across the US, where recommendations for social distancing and for business and institution closings have been inconsistent. While some situations require more or less action than others, disparate messaging can make it extremely difficult to coordinate a unified response when one is needed. A study conducted by the organization Resolve to Save Lives demonstrates that this issue extends to the reporting of COVID-19 data. The study reviews all 50 US states’ COVID-19 data dashboards to assess their consistency and robustness. Uniform indicators across all 50 states’ data on COVID-19 spread, mortality and response are critical not only to ensure accountability and communicate the risk of this pandemic, but also to ensure the data can be used accurately and to their fullest extent. The review discovered a startling lack of consistency across all domains of critical pandemic-related data, except for deaths. Syndromic surveillance, the reporting of COVID-like illness and influenza-like illness in patients who present to healthcare facilities, was reported in only 37% of states for COVID-like illness and 18% for influenza-like illness. The immediate reporting of new daily counts of these illnesses is critical for predicting potential upcoming virus spread. The type of COVID-19 case indicator, such as new confirmed, probable, and per-capita rates, is not clearly defined in 40% of states, although all states display either new or cumulative cases.
Only 64% of states report data for nursing homes, correctional facilities, homeless shelters and other facility-specific settings. The number of tests performed is reported in >90% of states, but only 75% report PCR test positivity and just 5% report the average time from symptom onset to PCR test result, which should be no more than two days, as this is the period of peak infectivity. Slightly more than 80% of states report COVID-19-specific hospitalizations, but they vary between reporting cumulative and daily new counts, and fewer than 50% report intensive care unit bed admissions. Moreover, these numbers are presented as raw counts rather than per-capita rates, which prevents comparison with other locations. Only 15% of states report occupational healthcare worker infections. Finally, only 8 states report data on the source of exposure for cases, which reflects a region’s ability to control COVID-19 through awareness of where outbreaks occur. Aside from the type of data reported, there are significant variations in the display of data, performance targets and what data are considered important. For example, while 92% of states report COVID-19 cases, some report the case date as the date of specimen collection, some as the date of illness onset, and some as the date reported. Some states include data for both nursing home staff and residents, while others report only for residents. Among the >90% of states reporting testing, they vary in reporting either cumulative or weekly numbers and in the type of test being reported. Some report PCR positivity for the day, while others require users to calculate it themselves. While all but three states include data on demographics, they vary greatly in the type of information reported (cases, deaths, hospitalizations) and the type of stratification (age, sex, race/ethnicity, or a combination). Granted, establishing websites to inform the public and policymakers is unprecedented, but there are major flaws in the way it was carried out. The state-to-state dissimilarities considerably hinder the ability to compare the situation in one state with another. They can result in the misuse and misunderstanding of the data, cause inconsistent public health safety guidelines, and cost the lives of people affected by the absence of demographic data and blindness to the risk of the disease in their area.
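Two of the normalizations missing from many dashboards, per-capita case rates and PCR test positivity, are simple to compute. The sketch below illustrates the arithmetic with invented numbers for two hypothetical states; it is not code from the Resolve to Save Lives study.

def per_100k(count: int, population: int) -> float:
    """Convert a raw count into a rate per 100,000 residents."""
    return count / population * 100_000

def positivity(positive_tests: int, total_tests: int) -> float:
    """Percentage of PCR tests that came back positive."""
    return positive_tests / total_tests * 100

# Raw counts alone would suggest state A is worse off than state B.
state_a = {"cases": 5_000, "population": 10_000_000, "pos": 900, "tests": 30_000}
state_b = {"cases": 1_200, "population": 1_000_000, "pos": 400, "tests": 8_000}

for name, s in (("A", state_a), ("B", state_b)):
    rate = per_100k(s["cases"], s["population"])
    pct = positivity(s["pos"], s["tests"])
    print(f"state {name}: {rate:.0f} cases/100k, positivity {pct:.1f}%")
# state A: 50 cases/100k, positivity 3.0%
# state B: 120 cases/100k, positivity 5.0% -- the per-capita view reverses the
# impression given by raw counts, which is why standardized per-capita
# reporting matters for cross-state comparison.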
4.2 Epidemiology and public health can learn an important lesson from other fields in the biomedical sciences The rigor and reproducibility crisis in the biomedical sciences has moved scientists across different fields to establish and develop guidelines for reporting data and methods with rigor and robustness. In enzymology, key measurements, reagents, temporal data and other critical information are often left out, leading to irreproducible studies and unreliable results. The lack of consensus within the community results in inconsistent reporting of data across studies. Experiments are conducted in different environments and in a variety of ways without consideration of the weight each variation carries. This has led to discrepancies in the reporting of physical constants and to irreproducible scientific findings. In an effort to gain control, the Standards for Reporting Enzymology Data guidelines were created to inform enzymologists of what data are critical to report for their experiments. These guidelines ensure that the identity of the enzyme, its preparation, storage conditions, assay conditions, enzyme activity, methodology and any other critical information are clearly stated in order to standardize studies in the field. Groups of experts in other fields of biology have come together in an attempt to resolve this growing issue. To encourage the reporting of critical information, these groups established guidelines such as the Minimum Information About a Microarray Experiment (MIAME) and the Minimum Information about a Biomedical or Biological Investigation (MIBBI). According to the metrologists in Plant et al. , establishing consensus requirements such as these is the first step to restoring validity and reproducibility to published results in the respective scientific fields. Plant et al. discuss three core aspects that are vital to identifying confounding variables and assessing uncertainty within a study. First, characterizing the experimental system, such as specifying instrumentation, characteristics of the subjects and computational tools, will make results robust. Second, immutable reference materials and reference data, such as instrument calibrations and the type of software used, will make results reproducible and comparable between laboratories. An example is the optical density (OD) level used in ELISA tests or the type of specimen used in the validation of COVID-19 test quality. Third, valid interpretation of the data, given the known truths and limitations of the experiment, will make the conclusions sound. 4.3 What are the next steps for epidemiology and public health? There has been a promising development in the standardization of reporting figures, context, and terminology on the CDC website . On this page, the CDC outlines how diagnostic and screening testing sites must be accredited, how they report their data and to which organizations (regional, state, and federal public health departments), what data elements should be reported (age, race, sex, test ordered, date, etc.) and the standard terminology that should be used. However, there are many other areas where more work is required. For example, in the case of fatality rates, we suggest the reporting of seven categories of metadata (see ) in studies estimating the seroprevalence and/or the IFR of a population. Included are topics concerning both seroepidemiology and modeling, which have the potential to cause significant uncertainties and variations in data, as we have discussed. Again, these suggestions should be considered a starting point for experts in the field to ensure a complete picture of how each COVID-19 epidemiological study is painted. While this table can be used by epidemiologists in their studies, the Resolve to Save Lives study similarly includes a table of 15 essential COVID-19 indicators that should be reported by each county, state, and country, together with example data dashboards, which can be used more generally by serologists, policy makers and government officials. We also recommend Table 1 of another study by Plant et al. , which provides general guidelines to kick-start the conversation of identifying any uncertainties within serology studies that have yet to be identified. That table was created by summarizing the sources of uncertainty described in the Guide to the Expression of Uncertainty in Measurement. The next critical step is setting in motion a discussion within the epidemiological field to standardize the measuring and reporting of data.
To do this effectively, we suggest that an international committee of epidemiological experts come together and establish minimum reporting guidelines in epidemiology and public health. This group could be coordinated by the CDC and the National Institute of Standards and Technology (NIST), which are well positioned to guide the initiative effectively. The previously mentioned page on the CDC website does well to address our concerns as they relate to lab-reported data; however, these guidelines could also be extended to all serology studies. It would also be beneficial to establish an international committee analogous to, or within, the Bureau International des Poids et Mesures, the international body that brings together national metrology institutes such as NIST from countries around the world to establish what needs to be standardized within fields of science and what fundamental definitions of quantities should be adopted. Without an international committee and encouragement by higher institutions, it will be difficult, if not impossible, to establish global guidelines to be prepared for the next pandemic. Beyond the epidemiology field, the expectation to standardize methods of reporting COVID-19-related data will hopefully be implemented in all government health agencies across the United States, as this is the most direct way to improve the quality of data presented to the public and policymakers. While much of this data may not be immediately available in all states, instituting a set of indicators to be reported, such as those in the Resolve to Save Lives study , will begin this critical process. The benefits of investing resources into properly gathering this data will certainly outweigh the costs to our economy, social lives, and public health. While we have used the current pandemic as our case for standardizing the methods of data collection and reporting, we hope that the concepts presented in this paper will become well established in the epidemiological community. This issue is easily overlooked and is more prevalent than one might think. Fixing the problem for the current and future pandemics begins with increasing awareness of how one’s research interacts with the research of others in the same field. Establishing this perspective will reveal the many connections among the extensive body of research published on any single topic and improve our ability to utilize each and every finding. Our goal is to add this perspective on experiment design and data reporting to the arsenal of the epidemiological scientist. We believe that doing so will help further develop an already robust field and enhance the real-time impact of epidemiological research on public health.
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Semantic analysis of SNOMED CT for a post-coordinated database of histopathology findings
When pre-coordinated content is not fit for purpose, IHTSDO provides guidance on properly constructing post-coordinated SNOMED CT expressions, wherein syntactically normalized SNOMED CT expressions are developed at the time the finding is recorded. SNOMED CT concepts and relationships may be combined in such a way as to unambiguously represent a clinical finding, following the SNOMED CT concept model as explained in the SNOMED CT User Guide and, more recently, the SNOMED CT Starter Guide, thereby expanding the expressivity of SNOMED CT. The objective of this research was to investigate SNOMED CT as an expressive terminology for describing detailed histopathologic findings, in order to capture surgical pathology findings in a discrete, explicit, and interoperable fashion. It was hypothesized that microscopic histopathologic findings could be accurately represented using SNOMED CT. The research further sought to elucidate attributes, values, and syntax of SNOMED CT that might be required to enhance the expressivity of SNOMED CT for histopathologic findings in surgical pathology.
Twenty-four breast biopsy cases (13 malignant and 11 non-malignant diagnoses) were selected for review from cases previously signed out as part of the University of Nebraska Medical Center Department of Pathology and Microbiology breast pathology service. Efforts were made to include cases that demonstrated a variety of diagnostic features as noted in the final diagnoses reported in the laboratory information system (LIS). The surgical pathologists reviewed digital whole slide images (WSI) of histologically prepared glass slides of the selected cases to identify tissue architectural features of diagnostic importance and tissue morphologies contributing to the final diagnosis. Histopathologic features supporting the final diagnoses were marked up using WSI viewing software tools. Each marked-up feature was annotated by the individual pathologist in their own words, thereby creating a series of stated assessments. The diagnostic comments and statements contained in the image annotations and the final, signed-out report as recorded in the LIS were categorized and reduced to 95 lexically distinct statements . After each case was reviewed, marked up, and annotated, the authors (WSC, JRC) analyzed the meaning of the clinical statements based on the underlying semantics intended by the pathologists and identified pre-coordinated SNOMED CT expressions or developed post-coordinated SNOMED CT expressions to accurately and comprehensively represent each microscopic finding. The SNOMED CT concept model as defined in the 2012 SNOMED CT Editorial Guide and the 2012 SNOMED CT Technical Guide was strictly observed. The SNOMED CT July 2012 international release was the reference terminology release. The CliniClue Xplore SNOMED CT browser utility (The Clinical Information Consultancy Ltd, UK, 2011) was used to perform word searches to identify possible concepts to include in the definitional expression of each histological finding. Once post-coordinated SNOMED CT expressions had been developed for each histological feature, they were reviewed by the pathologists who made the statements to ensure that each expression faithfully captured the intended meaning of the stated definition. Changes to the post-coordinated expressions were made as necessary to ensure that each SNOMED CT expression accurately defined the intent of the pathologist's stated assessment and remained consistent with the SNOMED CT concept model as specified in the Technical Guide. In particular, the SNOMED CT concept model consists of a limited number of top-level concept hierarchies, including |clinical finding|. Each concept within a hierarchy is defined by a series of attribute–value pairs. Attributes represent definitional aspects of the concept (eg, a clinical finding is defined by attributes such as |finding site|, |associated morphology|, and |finding method|). Each attribute is paired with a concept value that the attribute asserts (eg, to assert that the finding is in the breast, |finding site|=|structure of breast|). The model specifies the definitional requirements and constraints that must be followed to properly construct a concept within a top-level hierarchy (ie, the allowable attribute domains and the range of allowable concept values). A senior SNOMED CT terminologist (JRC) also reviewed each expression for semantics and adherence to the SNOMED CT concept model.
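As a purely illustrative aid, the minimal sketch below shows one way the attribute–value pairs of a post-coordinated expression could be held in software; it is not the tooling used in this study, and the class and helper names are our own invention. The identifiers are taken from the boxes later in this paper.

from dataclasses import dataclass, field

@dataclass
class Concept:
    sctid: str   # SNOMED CT concept identifier
    term: str    # human-readable term

@dataclass
class PostCoordinatedExpression:
    focus: Concept                                   # e.g. |clinical finding|
    refinements: list = field(default_factory=list)  # (attribute, value) pairs

    def add(self, attribute: Concept, value: Concept) -> None:
        """Refine the focus concept with one attribute-value pair."""
        self.refinements.append((attribute, value))

    def __str__(self) -> str:
        pairs = ", ".join(f"{a.sctid}|{a.term}|={v.sctid}|{v.term}|"
                          for a, v in self.refinements)
        return f"{self.focus.sctid}|{self.focus.term}|: {pairs}"

# Encode a simple finding in the style of the boxes in this paper.
expr = PostCoordinatedExpression(Concept("404684003", "clinical finding"))
expr.add(Concept("363698007", "finding site"),
         Concept("279009002", "glandular structure of breast"))
expr.add(Concept("116676008", "associated morphology"),
         Concept("367647000", "fibrocystic changes"))
print(expr)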
Cases where no SNOMED CT expression could be developed to accurately represent the intended meaning with adherence to SNOMED CT editorial rules were inventoried and classified by reason of encoding failure.
Of the 24 breast biopsy cases, 95 diagnostic or pre-diagnostic features were marked up and annotated by two pathologists (WWW, SHH) or were found to be explicitly stated in the final diagnostic summary. Sixty-nine statements represented conclusive or probabilistic diagnostic assertions, and 26 pathologist statements were descriptive in nature. Seventy-three unique post-coordinated SNOMED CT expressions were constructed from these stated definitions. The meaning of complex statements representing conjunctions, such as ‘fibrocystic changes including stromal fibrosis, apocrine metaplasia, cyst formation, and hyperplastic changes’, was captured by employing the SNOMED CT expression syntax for complex expressions . Only one of the 74 findings was represented by a pre-coordinated concept, |calcification of breast (finding)|. Box 1 Stated finding of fibrocystic changes refined by specified abnormal morphologies Fibrocystic changes including stromal fibrosis, apocrine metaplasia, cyst formation, and hyperplastic changes |IS A| 404684003|clinical finding|: 363698007|finding site|=279009002|glandular structure of breast|, 116676008|associated morphology|=367647000|fibrocystic changes|, 116676008|associated morphology|=367643001|cyst|, 116676008|associated morphology|=81274009|apocrine metaplasia|, 116676008|associated morphology|=112674009|fibrosis|, 116676008|associated morphology|=31390008|epithelial hyperplasia|, 418775008|finding method|={104210008|hematoxylin and eosin stain method|+ 252416005|histopathology test|+104157003|light microscopy|} Representation of numbers Eleven of the 13 malignant cases involved linear measurement of a tumor extent. Attribute–value pairs for |observable entity (observable entity)| were used for recording dimensions of tumors or ductal carcinoma in situ (DCIS) involvement within the biopsy, using values of |tumor size, invasive component, greatest linear dimension (observable entity)| or |linear extent of involvement of carcinoma in specimen obtained by needle biopsy (observable entity)|. A SNOMED CT standard for numerical representation is currently under ballot by IHTSDO but to date has not been approved. Therefore, post-coordinated expressions including dimensions could not accurately be rendered with the current concept model. For a statement of a clinical finding, the SNOMED CT model extension under ballot would express this as an attribute–value pair of |has interpretation (attribute)|={numerical observation + units of measure}. Since this is a known limitation of the SNOMED CT expression syntax undergoing ballot review, the condition was counted only once in our inventory of SNOMED CT limitations. This left 24 of the 95 unique clinical statements (25%) that could not be adequately represented by SNOMED CT expressions. In all, valid SNOMED CT expressions were constructed for 75% of the assessment statements.
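Pending an approved SNOMED CT syntax for concrete values, one pragmatic pattern is to store the numeric result and its units alongside the coded observable rather than inside the expression. The sketch below is our own illustration of that pattern, not part of the balloted model; the placeholder identifier must be replaced with a real SCTID.

from dataclasses import dataclass

@dataclass
class CodedMeasurement:
    observable_sctid: str  # SNOMED CT |observable entity| concept identifier
    observable_term: str
    value: float           # numeric result kept outside the expression
    units: str             # unit code, e.g. UCUM "mm"

tumor_size = CodedMeasurement(
    observable_sctid="000000000",  # placeholder only; not a real SCTID
    observable_term="tumor size, invasive component, greatest linear dimension",
    value=12.0,
    units="mm",
)
print(f"{tumor_size.observable_term}: {tumor_size.value} {tumor_size.units}")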
Related findings Concepts subsumed by the clinical finding hierarchy are defined by a set of attributes and a range of concept values, including |finding site| with allowable values of |anatomical or acquired body structure| and its subtypes, and the attribute |associated morphology| with allowable values of |morphologically abnormal structure| and its subtypes. Each attribute is paired with a defined SNOMED CT concept value (eg, 31737007|structure of small lactiferous ductules| or 31390008|epithelial hyperplasia|), thus creating a list of attribute–value pairs. The SNOMED CT concept model dictates which attributes and values may be used to define |clinical finding (finding)|. A |clinical finding (finding)| is considered ‘fully defined’ when all necessary and sufficient permissible attribute–value pairings have been enumerated to unambiguously define the |clinical finding| concept in question. The defining attributes required for each post-coordinated expression, implicit to the histologic methods employed and the specimens examined as part of this research project, consisted of |finding site|, |associated morphology| and |finding method|. The |severity| qualifier was utilized when necessary to assert degrees of the |morphologically abnormal structure| attribute–value pair (eg, severe epithelial hyperplasia or mild, hyalinized fibrosis). Breast cancer diagnostic statements often asserted the presence of cancer (eg, DCIS) together with a histologic grade or Nottingham score. In the current SNOMED CT concept model, histologic grades and Nottingham scores of carcinomas are defined as pre-coordinated, primitive |clinical finding (finding)| concepts and are not included in the domain of defining attributes of |clinical finding (finding)|. However, the defining attribute 47429007|associated with (attribute)| is in the allowed domain and can be paired with a defined clinical finding concept to assert the presence of a related clinical finding. Therefore, the defining attribute |associated with (attribute)| was paired with a clinical finding concept value of the appropriate histologic grade or Nottingham score, such as |Nottingham combined grade I: 3–5 points|, to assert a cancer diagnosis with a histologic grade or Nottingham score. To assert concomitant conditions that must be enumerated in synoptic reports, such as DCIS found in the presence of invasive ductal carcinoma, the abnormal morphology concepts were listed individually, as shown in Box 2. The microscopic |anatomical or acquired body structure (body structure)| values in this study were limited to six specific SNOMED CT |anatomical or acquired body structure (body structure)| concept codes pertaining to the glandular structure of the breast, three codes pertaining to non-glandular connective tissue, and one code generalizing breast structure. Box 2 Formalism for invasive ductal carcinoma with associated DCIS Invasive ductal carcinoma with associated DCIS |IS A| 404684003|clinical finding|: 363698007|finding site|=279009002|glandular structure of breast|, 116676008|associated morphology|=82711006|infiltrating duct carcinoma|, 47429007|associated with|=404684003|clinical finding|: (363698007|finding site|=64633006|lactiferous duct structure|, 116676008|associated morphology|=86616005|intraductal carcinoma, non-infiltrating, no ICD-O subtype|), 418775008|finding method|={104210008|hematoxylin and eosin stain method|+ 252416005|histopathology test|+104157003|light microscopy|} Missing SNOMED CT concepts A defined SNOMED CT concept for the lumen of the breast duct or ductule did not exist in the July 2012 release but was pre-coordinated in July 2013 (64633006|structure of lumen of lactiferous duct (body structure)|). The lumen of the breast duct was used in one finding in the 24 cases analyzed as part of this study. To assert the proper concept from the procedure hierarchy for the findings noted by light microscopy, histopathology, and hematoxylin and eosin stain, three procedure concept values were combined.
Namely, concept values for |light microscopy|, |histopathology test| and |hematoxylin and eosin stain method| were combined to assert that the clinical finding was made by light microscopy examination of a histology specimen prepared with hematoxylin and eosin stain. For findings noted on slides prepared with immunohistochemistry (IHC) stains, the procedure value for |hematoxylin and eosin stain method| was replaced with the value |IHC procedure|. However, no SNOMED CT concept codes were defined for the specific IHC procedures required for differential diagnoses (ie, the p63 stain method, AE1/AE3 (pan-keratin) or e-cadherin). As such, the findings made by these procedures could not be defined completely, but rather remained generalized to an IHC procedure. This resulted in three findings for which SNOMED CT expressions could not be sufficiently defined within the constraints of the 2012 international release. All clinical findings in the 24 breast biopsy cases analyzed were defined by 44 abnormal morphology values paired with the defining attribute |associated morphology|. In six stated definitions, two or more concepts were joined to assert the co-occurrence of two or more abnormal morphologies observed in the same |anatomical or acquired body structure (body structure)| whose co-occurrence signified a unique, singular finding and not a simple co-occurrence of unrelated, distinct abnormal morphologies. For example, the clinical statement ‘epithelial hyperplasia with atypia’ required the binding of the concepts |epithelial hyperplasia| and |atypia suspicious for malignancy| to create a new concept asserting that epithelial hyperplasia with atypia |IS A| epithelial hyperplasia and |IS A| atypia suspicious for malignancy . Box 3 Two associated morphology concepts joined (bolded) to signify a singular, abnormal morphology Epithelial hyperplasia with atypia |IS A| 404684003|clinical finding|: 363698007|finding site|=31737007|structure of small lactiferous ducts|, 116676008|associated morphology|= (31390008|epithelial hyperplasia| + 44085002|atypia suspicious for malignancy|) , 418775008|finding method|={104210008|hematoxylin and eosin stain method|+ 252416005|histopathology test|+104157003|light microscopy|} Uncertainty and significant negatives Within the SNOMED CT concept model, clinical statements that assert conditions of probability or the specific absence of a finding require the use of |situation with explicit context|. Box 4 demonstrates the use of |situation with explicit context| to express the verbal statement of ‘no malignancy of breast’. The conjunction of |situation with explicit context (situation)| concepts was required when conditions of probability and/or absence of a finding occurred in combination with the positive presence of another finding. For example, the stated finding of ‘focal hyperplasia without atypia’ entailed the creation of a 243796009|Situation with explicit context (situation)| using the post-coordinated value of epithelial hyperplasia as the 404684003|Clinical finding (finding)| value of the |associated finding| attribute, together with a |finding context| of |known present|. This |situation with explicit context (situation)| was grouped with a second |situation with explicit context (situation)| consisting of the post-coordinated value for epithelial cell atypia and a finding context of |known absent| .
Other examples of exclusionary findings included hyperplasia without atypia and cystically dilated ductules without atypia. Therefore, if the scope of implementation of a surgical pathology database is to include statements of probability or clinical absence, explicit context must be modeled for all findings. Box 4 Formalism for no malignancy of breast using |situation with explicit context| No malignancy of breast |IS A| 243796009|situation with explicit context|: {408729009|finding context|=410516002|known absent|, 246090004|associated finding|=404684003|clinical finding|: (363698007|finding site|=76752008|breast structure|, 116676008|associated morphology|=86049000|malignant neoplasm, primary|, 418775008|finding method|={104210008|hematoxylin and eosin stain method|+ 252416005|histopathology test|+104157003|light microscopy|}), 410510008|temporal context value|=410585006|current – unspecified|, 408732007|subject relationship context|=410604004|subject of record|} Box 5 Expression joining two situations with explicit context to represent focal epithelial hyperplasia without atypia Focal hyperplasia without atypia |IS A| 243796009|situation with explicit context|: {408729009|finding context|=410515003|known present|, 246090004|associated finding|=404684003|clinical finding|: (363698007|finding site|=76752008|breast structure|, 116676008|associated morphology|=36949004|focal epithelial hyperplasia|, 418775008|finding method|={104210008|hematoxylin and eosin stain method|+ 252416005|histopathology test|+104157003|light microscopy|}), 410510008|temporal context value|=410585006|current – unspecified|, 408732007|subject relationship context|=410604004|subject of record|} + 243796009|situation with explicit context|: {408729009|finding context|=410516002|known absent|, 246090004|associated finding|=404684003|clinical finding|: (363698007|finding site|=4212006|epithelial cell|, 116676008|associated morphology|=44085002|atypia suspicious for malignancy|, 418775008|finding method|={104210008|hematoxylin and eosin stain method|+ 252416005|histopathology test|+104157003|light microscopy|}), 410510008|temporal context value|=410585006|current – unspecified|, 408732007|subject relationship context|=410604004|subject of record|} Absence of cellular architecture SNOMED CT concepts describing morphologic features as described by the pathologists and included in the stated assessment of the observation, including descriptions of cellular formations, were not present in the allowable concept hierarchies for clinical findings, and/or no concept definition was present within any concept hierarchy to describe the observed tissue morphometry. Therefore, valid post-coordinated SNOMED CT expressions could not be created for stated assessments such as ‘nests and irregular cords of pleomorphic epithelial cells’ or ‘dense hyalinized connective tissue’. This condition prevented the creation of post-coordinated SNOMED CT expressions for 20 findings (or 21% of the stated expressions) in this dataset (see ). To express the pathologists’ statements of the degree of morphology observed within the histologically prepared slide, the 272141005|Severities (qualifier value)| attribute was used.
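The practical consequence of modeling explicit context, as in Boxes 4 and 5, is that downstream queries must filter on it. The sketch below, using invented data structures rather than any software from this study, shows how a query restricted to |known present| findings avoids falsely matching a morphology that was explicitly ruled out.

# Each encoded finding carries its morphology concept and its finding context.
findings = [
    {"morphology": "31390008|epithelial hyperplasia|", "context": "known present"},
    {"morphology": "44085002|atypia suspicious for malignancy|", "context": "known absent"},
]

def asserted_present(findings, morphology):
    """Return findings where the morphology was observed, not merely mentioned."""
    return [f for f in findings
            if f["morphology"] == morphology and f["context"] == "known present"]

print(asserted_present(findings, "44085002|atypia suspicious for malignancy|"))
# -> [] : the atypia record is 'known absent', so a context-aware query
# correctly excludes it; a naive match on morphology alone would not.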
Eleven of the 13 malignant cases involved linear measurement of a tumor extent. Attribute–value pairs for |observable entity (observable entity)| were used for recording dimensions of tumors or ductal carcinoma in situ (DCIS) involvement within the biopsy using values of |tumor size, invasive component, greatest linear dimension (observable entity)| or |linear extent of involvement of carcinoma in specimen obtained by needle biopsy (observable entity)|. A SNOMED CT standard for numerical representation is currently under ballot by IHTSDO, but to date has not been approved. Therefore, these post-coordinated expressions including dimensions could not accurately be rendered with the current concept model. As a statement of clinical finding, the SNOMED model extension under ballot expects that as an attribute–value pair of | has interpretation (attribute)|={numerical observation+units of measure}. Since this is a known limitation of the SNOMED expression syntax undergoing ballot review, the condition was only counted once in our inventory of SNOMED CT limitations. This left a total of 24 statements of the total 95 unique clinical statements (25%) that could not be adequately represented by SNOMED CT expressions. In all, valid SNOMED CT expressions were constructed for 75% of the assessment statements.
Concepts subsumed by the clinical finding hierarchy are defined by a set of attributes and a range of concept values that include |finding site| with allowable values of |anatomical or acquired body structure| and its subtypes or the attribute |associated morphology| with allowable values of |morphologically abnormal structure| and its subtypes. Each attribute is paired with a defined SNOMED CT concept value (eg, 31737007|structure of small lactiferous ductules| or 31390008|epithelial hyperplasia|), thus creating a list of attribute–value pairs. The SNOMED CT concept model dictates which attributes and values may be used to define |clinical finding (finding)|. A |clinical finding (finding)| is considered ‘fully-defined’ when all necessary and sufficient permissible attribute–value pairings have been enumerated to unambiguously define the |clinical finding| concept in question. The defining attributes required for each post-coordinated expression implicit to the histologic methods employed and the specimens examined as part of this research project consisted of |finding site|, |associated morphology| and |finding method|. The |severity| qualifier was utilized when necessary to assert degrees of the |morphologically abnormal structure| attribute–value pair (eg, severe epithelial hyperplasia or mild, hyalinized fibrosis. Breast cancer diagnostic statements often asserted the presence of cancer (eg, DCIS) and a histologic grade or Nottingham score. In the current SNOMED CT concept model, histologic grade and Nottingham scores of carcinomas are defined as pre-coordinated, primitive |clinical finding (finding)| concepts and are not included in the domain of defining attributes of |clinical finding (finding)|. However, the defining attribute, 47429007|associated with (attribute)| is in the allowed domain and can be paired with a defined clinical finding concept to assert the presence of a related clinical finding. Therefore, the defining attribute, |associated with (attribute)|, was paired with a clinical finding concept value of the appropriate histologic grade or Nottingham score such as, |nottingham combined grade I: 3–5 points|, to assert a cancer diagnosis with a histologic grade or Nottingham score. To assert concomitant conditions that must be enumerated in synoptic reports, such as DCIS found in the presence of invasive ductal carcinoma, the abnormal morphology concepts were listed individually as shown in . The microscopic |anatomical or acquired body structure (body structure)| values in this study were limited to six specific SNOMED CT |anatomical or acquired body structure (body structure)| concept codes pertaining to the glandular structure of the breast, three codes pertaining to non-glandular connective tissue, and one code generalizing breast structure. Box 2 Formalism for invasive ductal carcinoma with associated DCIS Invasive ductal carcinoma with associated DCIS |IS A| 404684003|clinical finding|: 363698007|finding site|=279009002|glandular structure of breast|, 116676008|associated morphology|=82711006|infiltrating duct carcinoma|, 47429007|associated with|=404684003|clinical finding|: (363698007|finding site|=64633006|lactiferous duct structure|, 116676008|associated morphology=86616005|intraductal carcinoma, non- infiltrating, no ICD-0 subtype|), 418775008|finding method|={104210008|hematoxylin and eosin stain method|+ 252416005|histopathology test|+104157003|light microscopy|}
A defined SNOMED CT concept for the lumen of the breast duct or ductule did not exist in the July 2012 release but was pre-coordinated in July 2013 (64633006|structure of lumen of lactiferous duct (body structure)|. The lumen of the breast duct was used in one finding in the 24 cases analyzed as part of this study. To assert the proper concept from the procedure hierarchy for the findings noted by light microscopy, histopathology, and hematoxylin and eosin stain, three procedure concept values were combined. Namely, concept values for |light microscopy|, |histopathology test| and |hematoxylin and eosin stain method| were combined to assert that the clinical finding was made by the light microscopy examination of a histology specimen prepared with hematoxylin and eosin stain. For findings noted on slides prepared with immunohistochemistry (IHC) stains, the procedure value for hematoxylin and eosin stain method| was replaced with the value |IHC procedure. However, no SNOMED CT concept codes for the specific IHC | procedures required for differential diagnoses were defined (ie, the p63 stain method, AE1/AE3 (pan keratin) or e-cadherin). As such, the findings made by these procedures could not be defined completely, but rather remained generalized to an IHC procedure. This resulted in three findings where SNOMED CT expressions could not be sufficiently defined within the constraints of the 2012 international release. All clinical findings in the 24 breast biopsy cases analyzed were defined by 44 abnormal morphology values that were paired with the defining attribute, |morphologically abnormal structure|. In six stated definitions, two or more concepts were joined to assert the co-occurrence of two or more abnormal morphologies observed in the same |anatomical or acquired body structure (body structure)| and whose co-occurrence signified a unique, singular finding and not a simple co-occurrence of unrelated, distinct abnormal morphologies. For example, the clinical statement ‘epithelial hyperplasia with atypia’ required the binding of the concepts |epithelial hyperplasia| and |atypia suspicious for malignancy| to create a new concept that asserts that epithelial hyperplasia with atypia |IS A| epithelial hyperplasia and |IS A| atypia suspicious for malignancy . Box 3 Two associated morphology concepts joined (bolded) to signify a singular, abnormal morphology Epithelial hyperplasia with atypia |IS A| 404684003|clinical finding|: 363698007|finding site|=31737007|structure of small lactiferous ducts|, 116676008|associated morphology|= (31390008|epithelial hyperplasia| + 44085002|atypia suspicious for malignancy) , 418775008|finding method|={104210008|hematoxylin and eosin stain method|+ 252416005|histopathology test|+104157003|light microscopy|}
Within the SNOMED CT concept model, clinical statements that assert conditions of probability or the specific absence of a finding require the use of |situation with explicit context|. Box 4 demonstrates the use of |situation with explicit context| to express the verbal statement 'no malignancy of breast'. The conjunction of |situation with explicit context (situation)| expressions was required when conditions of probability and/or absence of a finding occurred in combination with the positive presence of another finding. For example, the stated finding of 'focal hyperplasia without atypia' entailed the creation of a conjunction of 243796009|situation with explicit context (situation)| expressions, one using the post-coordinated value of epithelial hyperplasia for the 404684003|clinical finding (clinical finding)| attribute along with a |finding context| of |known present|. This |situation with explicit context (situation)| was grouped with the |situation with explicit context (situation)| consisting of the |clinical finding (clinical finding)| attribute with the post-coordinated value for epithelial cell atypia and a finding context of |known absent| (Box 5). Other examples of exclusionary findings included hyperplasia without atypia and cystically dilated ductules without atypia. Therefore, if the scope of implementation of a surgical pathology database is to include statements of probability or clinical absence, explicit context must be modeled for all findings.

Box 4 Formalism for no malignancy of breast using |situation with explicit context|

No malignancy of breast |IS A| 243796009|situation with explicit context|:
    {408729009|finding context| = 410516002|known absent|,
     246090004|associated finding| = 404684003|clinical finding|:
        (363698007|finding site| = 76752008|breast structure|,
         116676008|associated morphology| = 86049000|malignant neoplasm, primary|,
         418775008|finding method| = {104210008|hematoxylin and eosin stain method| +
             252416005|histopathology test| + 104157003|light microscopy|}),
     410510008|temporal context value| = 410585006|current – unspecified|,
     408732007|subject relationship context| = 410604004|subject of record|}

Box 5 Expression joining two situations with explicit context to represent focal epithelial hyperplasia without atypia

Focal hyperplasia without atypia |IS A| 243796009|situation with explicit context|:
    {408729009|finding context| = 410515003|known present|,
     246090004|associated finding| = 404684003|clinical finding|:
        (363698007|finding site| = 76752008|breast structure|,
         116676008|associated morphology| = 36949004|focal epithelial hyperplasia|,
         418775008|finding method| = {104210008|hematoxylin and eosin stain method| +
             252416005|histopathology test| + 104157003|light microscopy|}),
     410510008|temporal context value| = 410585006|current – unspecified|,
     408732007|subject relationship context| = 410604004|subject of record|}
+ 243796009|situation with explicit context|:
    {408729009|finding context| = 410516002|known absent|,
     246090004|associated finding| = 404684003|clinical finding|:
        (363698007|finding site| = 4212006|epithelial cell|,
         116676008|associated morphology| = 44085002|atypia suspicious for malignancy|,
         418775008|finding method| = {104210008|hematoxylin and eosin stain method| +
             252416005|histopathology test| + 104157003|light microscopy|}),
     410510008|temporal context value| = 410585006|current – unspecified|,
     408732007|subject relationship context| = 410604004|subject of record|}
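As a sketch of how such contextualized findings might be held in a database layer (hypothetical helper code, not part of the original study), each |situation with explicit context| wraps a finding together with its finding context and the default temporal and subject context, and the statement is the conjunction of the two situations:

```python
# Illustrative sketch only: the two situations of Box 5 as plain dictionaries,
# with the default (soft) context stated explicitly.
DEFAULT_CONTEXT = {
    "410510008|temporal context value|": "410585006|current – unspecified|",
    "408732007|subject relationship context|": "410604004|subject of record|",
}

def situation(finding_context: str, finding: dict) -> dict:
    """Wrap a clinical finding in a situation with explicit context."""
    return {
        "focus": "243796009|situation with explicit context|",
        "408729009|finding context|": finding_context,
        "246090004|associated finding|": finding,
        **DEFAULT_CONTEXT,
    }

present = situation(
    "410515003|known present|",
    {"363698007|finding site|": "76752008|breast structure|",
     "116676008|associated morphology|": "36949004|focal epithelial hyperplasia|"},
)
absent = situation(
    "410516002|known absent|",
    {"363698007|finding site|": "4212006|epithelial cell|",
     "116676008|associated morphology|": "44085002|atypia suspicious for malignancy|"},
)

# 'Focal hyperplasia without atypia' = conjunction of the two situations.
focal_hyperplasia_without_atypia = [present, absent]
```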
Some SNOMED CT concepts needed to describe the morphologic features stated by the pathologists in their assessments, in particular descriptions of cellular formations, were not present in the allowable concept hierarchies for clinical findings; in other cases, no concept definition was present within any concept hierarchy to describe the observed tissue morphometry. Therefore, valid post-coordinated SNOMED CT expressions could not be created for stated assessments such as 'nests and irregular cords of pleomorphic epithelial cells' or 'dense hyalinized connective tissue'. This condition prevented the creation of post-coordinated SNOMED CT expressions for 20 findings (21% of the stated expressions) in this dataset (see ). To express the pathologists' statements of the degree of morphology observed within the histologically prepared slide, the 272141005|Severities (qualifier value)| attribute was used.
The utility of SNOMED CT was evaluated as a means to represent the microscopic findings as stated by surgical pathologists in 24 breast biopsies. Sixty-nine of the 95 (75%) listed clinical assessments could be accurately and comprehensively encoded with existing SNOMED CT content and the current concept model. The remaining 26 (25%) clinical statements could not be adequately represented using the July 2012 international release of SNOMED CT. The areas in which SNOMED CT lacked adequate expressivity could be categorized into two groups. The first group comprised assessments for which no defined SNOMED CT concepts existed in the July 2012 SNOMED CT release. The second group could not be represented with SNOMED CT expressions because of constraints in the current SNOMED CT concept model. The absence of defined SNOMED CT concepts encountered in this research was primarily limited to specific IHC stains. The concept definition 117617002|IHC procedure (procedure)| exists but does not provide the specificity to describe a 404684003|clinical finding (clinical finding)| by the unique IHC procedure used by the pathologist. Enumeration of SNOMED CT concept definitions for specific IHC stains would be required to achieve the expressivity required by anatomic pathologists in their current daily practice. The single limitation based on |anatomical or acquired body structure (body structure)| encountered in this study was the absence of a defined concept for the lumen of the breast duct. This deficiency has been corrected in the 2013 SNOMED CT release and no longer presents a definitional issue for the findings of this research project. The SNOMED CT expression of clinical statements that used descriptive language was difficult, and often not possible, in this study. Twenty clinical statements could not be encoded using post-coordinated SNOMED CT expressions because no concept codes existed in the July 2012 international release that asserted the proper meaning of the stated clinical description. For example, the diagnostic expression 'nests and cords of pleomorphic epithelial cells' could not be formed into a SNOMED CT expression because pattern or shape concepts were not defined for 'nests' or 'cords'. Concepts for some cellular or tissue formations do exist within the SNOMED CT concept model, but they are found in the qualifier value/formations/descriptors hierarchy. The SNOMED CT concept model will have to support additional concepts for tissue morphometries and architectural features within the associated morphology hierarchy if this meaning is to be properly recorded. Definitive or conclusive abnormal morphology statements could be represented by defined SNOMED CT concepts. However, descriptive statements consistent with a conclusive abnormal morphology statement could not be represented using SNOMED terminology. For example, duct ectasia is a defined SNOMED CT concept, 110420004|duct ectasia|. However, the architectural features that describe duct ectasia, that is, 'simple epithelium overlying dense fibrous connective tissue forming large cystic structures', cannot be represented using SNOMED CT. SNOMED CT permits synonym descriptions for defined concepts, which can accommodate descriptive utterances, but development of concept definitions for the basic tissue architectural features may be a better approach for use in histopathology.
The practice of surgical pathology is largely that of pattern recognition by the pathologist of tissue specimens viewed by light microscopy within a given clinical context. The use of descriptive language concerning architectural features, shapes, and patterns of tissue formations is an important part of reaching differential diagnoses and of training pathology residents. Descriptive statements of tissue architecture within the SNOMED CT concept model present a challenge for the use of SNOMED CT findings expressions at the microscopic level. Restricting SNOMED CT expressions to definitive, conclusive abnormal morphology concepts without providing a descriptive layer of permissible concepts places artificial limitations on characterizing observed tissue morphometries. The development of a hierarchy of architectural concepts to be used within the clinical finding hierarchy should be investigated. It should be determined which architectural concepts are definitional and used to distinguish between diagnostic conditions/disorders, and which architectural concepts serve as qualifiers of definitional concepts. Both uses of architectural concepts can be found in anatomic pathology diagnostic practice. This differentiation of architectural concepts is important and will affect their representation within the overall SNOMED CT concept model. The current release of SNOMED CT, the Technical Users Guide and the Editorial Guide do not adequately address or define how to properly express certain clinical statements important to defining surgical pathology microscopic findings. In 11 of the 12 cancer cases reviewed, diagnostic statements concerning the greatest linear extent of invasive carcinoma and the linear extent of involvement of carcinoma in the needle biopsy were explicitly stated. A valid SNOMED CT expression could be constructed to assert the clinical statement with the exception of the numerical value of the measurement. Therefore, the linear measurement of the extent of carcinomas could not be recorded. This issue has been noted by IHTSDO and is currently under ballot for incorporation into the SNOMED CT concept model. Clinical expressions containing the positive presence of one morphologic abnormality and the pertinent absence of another morphologic abnormality required the conjunction of two situations with explicit context. As previously discussed, the statement 'usual hyperplasia' was represented in SNOMED CT using a conjunction of situations with explicit context. One situation states the absence of atypia suspicious for malignancy, and the other situation explicitly lists the default situational context of a clinical finding, that is, the attribute–value pairings of 410510008|temporal context value|=410585006|current – unspecified| and 408732007|subject relationship context|=410604004|subject of record|. It would seem an unnecessary burden to require that a SNOMED CT expression database for surgical pathology be maintained as |situation with explicit context (situation)| expressions for all instances of clinical findings. However, according to correspondence with the IHTSDO head terminologist, description logic constraints dictate that a clinical finding cannot be conjoined with a situation with explicit context, nor can the description logic classifier compute equivalence between a situation with explicit context and a clinical finding. Situation with explicit context expressions can only be conjoined with other situation with explicit context expressions.
The underlying description logic constraint prohibits the conjoining of two concepts from different top-level hierarchies. The soft context (ie, default context) of the clinical finding hierarchy is that the finding is present, the subject is the patient, and the temporal context is current. However, |situation with explicit context| is a separate top-level hierarchy. Therefore, a clinical finding and a situation cannot be combined to create a single concept expression. A clinical finding can, however, be expressed in the situation hierarchy by explicitly stating the soft context, in which case it can be conjoined with another situation. This guideline is not described in the published SNOMED CT documentation and is likely a little-known fact among SNOMED CT users. Post-coordinated databases that employ any uncertainty or statements of clinical absence must therefore include the additional attribute–value data for ALL clinical findings if they are to be supported by description logic query engines. Using the terminology to assert presence, absence, negation, or temporal context invites robust debate concerning the proper role of a terminology model and that of an information model. At one extreme, terminologies specialize in the definition of concepts and the relationships between them. At the other end of the modeling spectrum, information models specialize in the management of definitions, which often includes temporal and existential information. Between the two extremes, either modeling method can be employed. The level of success realized by either model depends on the particular use case and the binding of the terminology model and information model for that use case. SNOMED CT seeks to represent clinical concepts. Clinical concepts entail asserting the existence (or level of absence) and the temporal context of clinical findings. IHTSDO includes a mechanism to represent this type of information within the terminology. The scope of this study was to review the ability of SNOMED CT as a terminology, in its current state, to represent histopathology findings, not to evaluate the merits of alternative approaches. The construction of SNOMED CT expressions describing the histologic grade of an identified tumor was technically possible according to grammatical guidelines. This was done by pairing the associated morphology defining attribute with the abnormal morphology value of |intraductal carcinoma, non-infiltrating, no ICD-O subtype| and using the defining attribute |associated with| with the clinical finding concept of |DCIS nuclear pleomorphism, grade 1: monotonous nuclei, 1.5–2.0 red blood cell diameters, with finely dispersed chromatin and only occasional nucleoli (finding)|. Using each attribute–value pair in a single, post-coordinated expression is allowed by the current clinical finding guidelines, but it is unclear whether expressing nuclear grade as a finding concept is appropriate. Nuclear grade may be better represented as an observable entity concept. Histologic grade and Nottingham score represent measurement concepts, albeit with an amount of subjectivity, and would therefore be better expressed in the same manner as other measurement concepts. This is a definitional problem within the SNOMED CT release and has been communicated to IHTSDO for resolution. Alternatively, each clinical finding could be expressed independently, that is, DCIS as one finding and histologic grade 1 as another, separate finding. This approach is syntactically straightforward but subject to ambiguous interpretation.
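To restate the top-level hierarchy rule from the opening of this passage in executable form (a hypothetical validation sketch, not an official SNOMED CT or IHTSDO implementation):

```python
# Hypothetical sketch of the description logic rule discussed above.
CLINICAL_FINDING = "404684003|clinical finding|"
SITUATION = "243796009|situation with explicit context|"

def can_conjoin(focus_a: str, focus_b: str) -> bool:
    """Conjunction is permitted only between expressions in the same
    top-level hierarchy; in practice, only situation with situation."""
    return focus_a == focus_b == SITUATION

assert can_conjoin(SITUATION, SITUATION)              # situation + situation: allowed
assert not can_conjoin(CLINICAL_FINDING, SITUATION)   # finding + situation: rejected
# A finding becomes conjoinable only after it is lifted into the situation
# hierarchy by stating its soft context explicitly, as in Box 5 and the
# situation() helper sketched earlier.
```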
DCIS is a clinical finding whose meaning is well understood logically and by the clinician. Histologic grade, however, represents a clinical finding that is meaningless without an associated abnormal morphology subject to grading (eg, DCIS). Therefore, representing each clinical finding independently of the other is not useful. Employing the defining attributes for 404684003|clinical finding (clinical finding)| consistent with the SNOMED CT model definitions is demonstrated in the case of invasive carcinoma. To assert that the invasive ductal carcinoma has associated DCIS (as reported in CAP's cancer checklist), the post-coordinated expression for DCIS can be paired with the 47429007|associated with| attribute in a complex expression to assert the meaning of 'invasive carcinoma with associated DCIS' (Box 2). Similar expression construction can be employed to express other Cancer Checklist data elements such as associated necrosis, microcalcifications, and lobular carcinoma in situ.
Despite the limitations discussed with using SNOMED CT expressions as a vehicle to describe diagnostic features noted by microscopic examination and whole slide imaging, the SNOMED CT international release of July 2012 was adequate to express pathologists' interpretations for 75% of the statements listed in this study. Defined SNOMED CT concepts existed for each definitively described abnormal morphology in this set of breast biopsies. Furthermore, defined 442083009|anatomical or acquired body structure (body structure)| attribute–value pairs could be identified or constructed using the current SNOMED CT conceptual content. The July 2012 release of SNOMED CT did not allow sufficient and specific expression of the descriptive statements of histologic findings made by the examining pathologists. This deficiency presents an issue for knowledge capture and representation of microscopic, anatomic pathology assessments. Descriptive tissue architecture information recorded by the pathologist represents the thought process of the diagnostician and is therefore important to represent. The diagnostic thought process in combination with the resultant, definitive diagnoses represents the expression of knowledge of the physician. Conclusive information by itself is valuable for categorization of findings (assuming the conclusions are correct), and descriptive findings alone provide data that may or may not have meaning. Both elements of information are necessary to represent knowledge. Continued development of the SNOMED CT concept model and of conceptual content pertaining to microscopic examination of histologically prepared tissue specimens is required for the terminology to be effective in surgical pathology knowledge capture and knowledge management. A possible approach is to create a specialization of the concept 399984000|abnormal shape| within the |morphologically abnormal structure| hierarchy. Tissue and cellular formation, pattern, and other architectural descriptors can be subsumed within this new concept hierarchy. This approach can be developed within a local extension of SNOMED CT with a subsequent submission to IHTSDO for balloting consideration. Detailed tagging of microscopic pathologic findings with a controlled terminology such as SNOMED CT is necessary to link tissue morphometrics with diagnostic conclusions. Encoded histopathology descriptions and findings support the aggregation and reuse of image findings that can be used for training residents, creating automated diagnostic systems, and conducting translational research using histologic imagery. However, expansion of the SNOMED concept space and the current concept model to accommodate descriptive language is necessary for broad adoption of the terminology in the histopathology reporting process.
|
Effect of surface treatments and bonding type on elemental composition and bond strength of dentin | 15c9064d-710d-4c40-bc9d-4e13fde0eeee | 11541589 | Dentistry[mh] | In dentistry, a strong and enduring adhesion to tooth surfaces has a crucial role in the success of a restoration. Restorations used in prosthetic treatment mostly cover dentin and are bonded to the tooth by mechanical and chemical factors. Surface roughness is an important mechanical factor that contributes to the retention of a restoration because it increases the surface area. Mechanical retention can be formed inside a restoration or on the tooth surface. In clinical practice, the dentin surface is roughened by acid etching or laser treatments. The main disadvantage of acid etching is demineralization of the tooth structure; therefore, cavity pretreatment with lasers has been proposed as an alternative for dentin etching. In dentistry, lasers are used in conservative treatment; in addition, they are also used to roughen cavities or a tooth surface for bonding resins and prosthodontic attachments. Although many laser devices have been employed for this purpose, erbium laser devices [erbium, chromium: yttrium scandium gallium garnet (Er,Cr:YSGG) and erbium: yttrium aluminum garnet (Er:YAG)] are the most effective and safe laser systems and are frequently used in dental clinics. Erbium lasers have high absorption in water and hydroxyapatite; they can therefore ablate enamel and dentin with minimal adverse effects on the pulp and surrounding tissues. Studies investigating the roughening of dentin by erbium lasers reported that after laser treatment, rough surfaces were produced, dentin tubules opened, and the mineral content of dental tissues changed. Some studies have demonstrated that dental hard tissues can be precisely ablated with femtosecond lasers as an alternative to erbium lasers. Femtosecond lasers use ultrashort laser pulses. The main advantages of femtosecond lasers are their speed, accuracy, and ability to roughen the material surface with negligible heat loss. In prosthetic treatments, all-ceramic or adhesive fixed restorations are generally cemented to the tooth surface with a resin cement. Depending on the resin cement type, bonding agents are usually used with these cements. Bonding agents have an important role in the chemical adhesion of resin cements to the tooth surface, and they protect the demineralized collagen structure from oral bacteria and liquids of the oral environment. Some bonding agents can elute ions from their structure, provide remineralization, and contribute to the longevity of restorations. Studies reported in the literature are limited to the investigation of the morphological content, mineral content, both the morphologic and mineral content, and the bond strength of laser-treated dentin. There is still a lack of knowledge regarding the effect of different lasers (Er:YAG and femtosecond lasers) and phosphate-containing bonding types on the mineral content and bond strength of dentin. Therefore, the purposes of this research were (1) to assess the mineral changes in dentin after different surface treatments (Er:YAG or femtosecond laser) and phosphate-containing bonding application and (2) to assess the bond strength of dentin to resin cement.
Our hypotheses were as follows: (1) different surface treatments and/or bonding types can affect the mineral content of dentin and (2) different surface treatments and/or bonding types can affect the bond strength of dentin to resin cement.
A power analysis was performed before this study was conducted. Based on this analysis (with 80% power), each group needed to consist of a minimum of five specimens. Therefore, six specimens were prepared for each subgroup of this study. The Ethical Committee of the University of Selcuk (Konya, Turkey) approved the present study (committee number, 2016/04). Informed consent was obtained from all participants. Thirty-nine freshly extracted human molar teeth were embedded in self-cure acrylic resin (Meliodent; Heraeus Kulzer, Hanau, Germany) to 2 mm below the enamel–cement junction. Using a low-speed sectioning device (Isomet 1000; Buehler Ltd, Lake Bluff, Illinois, USA), approximately 1.5 mm of tooth structure was removed under water cooling from the occlusal plane of each tooth to expose the dentin surface. The teeth were randomly separated into three groups (n = 13) based on the surface treatment type:

(1) The control group, which received no surface treatment.

(2) The Er:YAG laser treatment group: an Er:YAG laser (Fidelis Plus III, Ljubljana, Slovenia) with a wavelength of 2940 nm was applied using an R02 handpiece. A laser optic fiber (0.9 mm) was placed perpendicular to the dentin surface at a distance of 1 mm, and the laser was applied to the dentin area with water irrigation and air cooling for 20 s. The laser parameters were as follows: 200 mJ, 20 Hz, and 50 µs (short pulse mode).

(3) The femtosecond laser treatment group: a femtosecond amplifier (Quantronix Integra C-3.5, NY, USA) was used to apply laser pulses at a wavelength of 800 nm and a pulse duration of 90 fs for dentin conditioning. The other laser parameters were as follows: power, 400 mW; marking speed, 30 mm/s; skip speed, 1250 mm/s; pulse repetition rate, 1 kHz; focal length, 11 cm; and focal spot diameter, 28.06 μm.

Morphologic analysis

One sample from each surface-treated group was evaluated by atomic force microscopy (AFM) and scanning electron microscopy (SEM) both before and after surface treatment to determine the morphologic effects of the surface treatments on the dentin surfaces. These specimens were not used for the other experiments.

AFM analysis

A specimen from each group was evaluated both before and after surface treatment by AFM (NT-MDT NTEGRA Solaris, Moscow, Russia) to assess the roughness and the morphologic effect of the surface treatments on the dentin surface. The dentin surface of the same tooth sample was examined before and after laser application. Digital images were taken in air using non-contact mode at a frequency of 240 kHz. Changes in vertical position provided the height of the images, registered as bright and dark regions. A 25 × 25 μm digital image was taken for each specimen and recorded at a slow scan rate (1 Hz).

SEM analysis

The samples used in the AFM analysis were also evaluated by SEM (EVO LS10; Zeiss, Cambridge, United Kingdom) both before and after surface treatment to determine the morphologic effects of the surface treatments on the dentin surfaces. The SEM images were obtained at ×1500 magnification.

Energy dispersive X-ray spectroscopy analysis

After the surface treatment, the dentin area was marked in each tooth. Each specimen underwent analysis by energy dispersive X-ray spectroscopy (EDX) (EVO LS10; Zeiss) both before and after bonding application to evaluate the elemental compositional changes in the dentin surface. In this analysis, the weight% of calcium (Ca) and phosphorus (P) and the Ca/P ratio were determined.
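To make the derived quantity concrete, the short sketch below (with hypothetical weight% values, not data from this study) shows how a Ca/P ratio follows from the EDX weight percentages; the molar conversion is included only for reference against stoichiometric hydroxyapatite:

```python
# Illustrative sketch (hypothetical weight% values, not study data): deriving
# the Ca/P ratio reported by EDX from the elemental weight percentages.
ca_wt_pct = 25.0   # hypothetical calcium weight%
p_wt_pct = 13.0    # hypothetical phosphorus weight%

ca_p_weight_ratio = ca_wt_pct / p_wt_pct   # ≈ 1.92, a weight ratio as reported here

# For reference, the corresponding molar ratio uses atomic weights
# (Ca ≈ 40.08, P ≈ 30.97); stoichiometric hydroxyapatite is ≈ 1.67 molar.
ca_p_molar_ratio = (ca_wt_pct / 40.08) / (p_wt_pct / 30.97)   # ≈ 1.49
print(f"{ca_p_weight_ratio:.2f} (wt), {ca_p_molar_ratio:.2f} (molar)")
```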
Each surface-treated group was separated into two subgroups based on which two-step self-etch adhesive was applied: Clearfil SE Bond (Kuraray Noritake, Okayama, Japan) or Clearfil SE Protect (Kuraray Noritake) (n = 6). The composition, pH value and manufacturers of the tested two-step self-etch adhesive systems are given in Table . The two-step self-etch adhesives were applied according to the manufacturer's instructions. The Clearfil SE Bond primer was applied to the dentin surface and air dried after 20 s. The bonding agent was then applied to the dentin surface, air dried, and light polymerized for 10 s. The Clearfil SE Protect primer was applied to the dentin surface and air dried after 20 s. This bonding agent was likewise applied to the dentin surface, air dried, and light polymerized for 10 s. After bonding application, a dual-cure resin cement (Panavia; Kuraray Medical Inc., Okayama, Japan) was applied to the dentin surfaces using a special mold (Fig. ). The teflon mold was 3 mm in diameter and 3 mm in height; therefore, the cement cylinders bonded to the dentin were 3 mm high. The resin cement was light polymerized (Bluephase; Ivoclar Vivadent, Schaan, Liechtenstein) for 40 s on each surface of the samples, for a total of 200 s. All samples were then kept in distilled water for 24 h at 37 °C.

Shear bond strength test

The shear bond strength test was executed in a universal testing machine (Shimadzu AGS-X; Shimadzu Corporation, Tokyo, Japan) at a crosshead speed of 0.5 mm/min. The shear force was applied at the resin cement–dentin interface. A chisel-shaped tip was used to apply the separating force and was positioned as close as possible to the bonding surface. Bond strength values were obtained according to the formula BS = F/A, where BS is the bond strength, F is the force required to debond the cement, and A is the area of the adhesive interface. Bond strength values are expressed in megapascals.

The weight% values of each element (Ca and P) and the Ca/P ratio in the surface-treated groups, and the differences in each value before and after bonding application, were compared by one-way analysis of variance (ANOVA) and two-way ANOVA, respectively. In addition, for each surface-treated group, a paired t-test was executed to compare the percentage by weight of each element before and after bonding application. Shear bond strength data were analyzed by two-way ANOVA. Post hoc tests were executed using Tukey's honest significant difference test (P = 0.05).
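Returning to the shear bond strength formula above, a worked example for the geometry used here (the debonding force below is hypothetical, not a measurement from this study):

```python
# Worked example of BS = F/A for a 3 mm diameter cement cylinder
# (illustrative force value only; not a measurement from this study).
import math

diameter_mm = 3.0                             # Teflon mold diameter used here
area_mm2 = math.pi * (diameter_mm / 2) ** 2   # bonded area ≈ 7.07 mm²

force_n = 100.0                               # hypothetical debonding force in newtons
bond_strength_mpa = force_n / area_mm2        # 1 N/mm² = 1 MPa

print(f"A = {area_mm2:.2f} mm², BS = {bond_strength_mpa:.1f} MPa")  # ≈ 14.1 MPa
```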
Microscope analysis

AFM results

The AFM images of all specimens both before and after the different surface treatments are given in Fig. . The Er:YAG laser increased the roughness of the dentin surface, producing holes and valleys (Fig. C). In contrast, the femtosecond laser roughened the dentin surface more homogeneously than the Er:YAG laser (Fig. C, E) but lowered the roughness of the dentin surface compared to the control group (Fig. A, E).

SEM results

The SEM images of the surface-treated dentin surfaces are given in Fig. . The Er:YAG laser-treated (Fig. C) and femtosecond laser-treated dentin (Fig. E) showed a rougher surface morphology than the control specimen (Fig. A). The dentin tubules were opened in both the Er:YAG and femtosecond laser-treated dentin surfaces. The femtosecond laser-treated dentin showed a more homogeneous surface structure than the Er:YAG laser-treated dentin.

Elemental and bond strength analyses

The results of the statistical analyses are given in Tables , and . Based on one-way ANOVA, the laser-treated dentin surfaces (Er:YAG and femtosecond lasers) revealed similar elemental compositions with regard to the percentage by weight of Ca and P and the Ca/P ratio. Both laser treatments significantly increased the Ca content and Ca/P ratio of dentin compared to the control group (no surface treatment) (P < 0.05). In addition, the P content of dentin was higher in the Er:YAG laser-treated group than in the control group (P = 0.001) (Table ). Based on two-way ANOVA, the surface treatment and bonding type significantly affected the difference in each element's weight% value before and after bonding application (P < 0.05). The paired t-test analysis revealed that the Ca and P content of dentin decreased after the bonding application compared to the surface-treated dentin. In the laser-treated dentin groups, the application of Clearfil SE Protect led to a significant decrease in Ca, P, and the Ca/P ratio (Table ). Based on the bond strength analysis, neither the surface treatment nor the bonding type influenced the bond strength values (P > 0.05) (Table ).
For the long-term endurance of prosthetic restorations, the bond between dentin and the resin cement is important and is affected by mechanical and chemical factors. Most prosthetic restorations cover the dentin surface rather than enamel; therefore, the present study examined the impact of surface treatments and bonding type on the mineral content and bond strength of dentin. In the current study, different laser treatments (Er:YAG and femtosecond laser) were applied to dentin surfaces to obtain mechanical retention, taking advantage of their ability to create microretentive surfaces with minimal injury to the surrounding tissues. The AFM and SEM images of Er:YAG and femtosecond laser-treated dentin showed a rough surface morphology (Figs. and ), similar to the SEM results of previous studies. Er:YAG laser-treated dentin showed the lack of a smear layer, whereas the femtosecond laser-treated dentin showed a debris-like surface structure. The application of the femtosecond laser without water irrigation may have caused this. In addition, the AFM and SEM images revealed that femtosecond laser treatment roughened the dentin surface more homogeneously than Er:YAG laser treatment. The AFM images also revealed that, although the femtosecond laser roughened the dentin surface, it lowered the roughness value of the dentin surface; this might be due to the shorter pulse duration of the femtosecond laser compared to the Er:YAG laser. In addition to morphologic alterations, it is necessary to consider the chemical changes produced by lasers in the irradiated tissue, because differences in the quality and quantity of the smear layer could affect the bonding ability of adhesives. Several studies have investigated the chemical alterations of dentin after Er:YAG laser application. Some authors reported that Er:YAG laser application did not alter the mineral composition of dentin; however, other authors reported that chemical alterations occurred in the organic and inorganic composition of dentin. Ji et al. investigated the effect of femtosecond laser ablation on the mineral content of dentin and reported that the P content of the tooth surface did not change in the ablated area. To detect elemental changes in dentin in our study, the same dentin area of each sample was examined by EDX analysis both after the surface treatment and after the bonding application. The analysis of laser-treated (Er:YAG and femtosecond laser) dentin revealed significantly higher Ca, P and Ca/P ratio values compared to the control group. A possible reason for this result, as Hossain et al. also found, is that the temperature elevation in the lased area during Er:YAG irradiation caused a relative increase in the Ca and P elements due to the reduction of the organic components. To the best of the researchers' knowledge, no standard adhesive application technique has so far been established to give clinicians better durability of the resin–dentin bonds of adhesive systems. For prosthetic restorations, bonding agents have an important role in the chemical adhesion of resin cements to the tooth surface. Except for self-adhesive resin cements, most resin cements require bonding agents for resin cementation. These bonding agents are categorized as etch-and-rinse adhesives or self-etch adhesives. Etch-and-rinse adhesives consist of an acid, a primer, and a bonding agent. Self-etch adhesive systems are categorized as one- or two-step self-etch adhesives.
One-step self-etch adhesive systems are all-in-one adhesives that consolidate acid etching, priming, and bonding. They are also termed 'universal' or 'multi-mode' adhesives. Two-step self-etch adhesive systems consist of a separate primer and bonding agent. Primer treatment in either etching mode before cementation may induce chemical interaction because of functional monomers reaching the intact dentin through the smear layer or the hybrid layer. Most studies using Er:YAG laser treatment reported that the bond strength of two-step self-etch adhesive systems on dentin was better than that of etch-and-rinse adhesive systems. In addition, it was reported that the hybrid layer created by a self-etching system would contain minerals, because self-etching systems demineralize dentin and do not use a rinsing step. Reincorporation of minerals into the dentin is noteworthy because the deposited minerals may repair nanometer-sized voids and may be resistant to deterioration in the mouth. Previous reports thus suggested that higher bond strength was achieved when a two-step self-etch adhesive system was used, which could be explained by the complete dissolution of the smear layer into the adhesive. Based on the results of former studies, phosphate-containing, 10-methacryloyloxydecyl dihydrogen phosphate (MDP)-based two-step self-etch adhesive systems (Clearfil SE Bond and Clearfil SE Protect) were used in our study to determine whether the P-containing self-etch adhesives would reincorporate P into the dentin. The current study revealed that, after the bonding application, the mineral content of dentin decreased compared to the surface-treated dentin. This finding may be because of the acidic content (pH 2) of the self-etch adhesive systems. The acidic monomer found in self-etch adhesives demineralizes the superficial dentin surface by partially dissolving minerals around the collagen fibrils while simultaneously allowing infiltration of resin monomers. Because both the surface treatments and the bonding type affected the mineral content of dentin, the first hypothesis was accepted. In the literature, the bond strength of dentin has been investigated from different perspectives. Studies on Er:YAG laser-treated dentin investigated the effects of different adhesives, irradiation distance and different energies on the bond strength of dentin. Some authors reported that the bond strength values of Er:YAG laser-treated dentin were higher with two-step self-etch adhesives than with total-etch systems. Shirani et al. reported no significant difference in bond strength between control dentin and Er:YAG laser-treated dentin after the application of two-step self-etch adhesives. However, Ramos et al. pointed out that Er:YAG laser treatment decreased the bond strength of dentin compared to the control group when used with two-step self-etch adhesives. Moreover, studies focusing on femtosecond laser-treated dentin examined the effects of surface shape and different adhesives on the bond strength of dentin. Gerhardt-Szep et al. found no significant difference in the bond strength values of femtosecond laser-treated dentin groups between primer application and primer + bond application. Portillo et al. reported that both Er:YAG laser treatment and femtosecond laser treatment decreased the bond strength of dentin compared to the control group when used with two-step self-etch adhesives.
According to the bond strength results of this research, and similar to the findings of other studies, the different surface treatments and bonding types did not affect the bond strength values (P > 0.05). Based on these findings, the second hypothesis was rejected. Therefore, either surface treatment type (no treatment, Er:YAG, or femtosecond laser) can be used on the dentin surface before resin cementation. Although the bond strength values were not significantly different among the groups, after laser application (Er:YAG or femtosecond laser) Clearfil SE Bond should be recommended instead of Clearfil SE Protect, because in both laser-treated groups Clearfil SE Protect significantly decreased the mineral content of dentin in comparison to Clearfil SE Bond (P < 0.005) (Table ). Waidyasekera et al. pointed out that dentin decalcified by the acids in self-etch adhesives is more likely to react with fluoride because of its increased porosity. They also stated that, as Clearfil Protect Bond is a fluoride-ion-releasing adhesive system, fluoride ions are reported to increase the rate of calcium phosphate crystallization and decrease the rate of apatite dissolution, altering the mineral content of dentin. This study had some limitations. The elemental composition of dentin was investigated after both the surface treatment and the bonding application; however, the elemental composition of dentin before the surface treatment and the impact of aging on the bond strength of dentin were not investigated. In addition, the failure types after the bond strength test were not investigated. In this study, the mineral content of dentin decreased after the bonding application compared to the surface-treated dentin. Therefore, to provide favorable interaction with laser-modified dentin surfaces, future research should focus on developing Ca- and P-containing adhesive systems that contain no acidic agents. Further, the effect of aging on ion elution from these bonding agents to dentin and on dentin bond strength should also be investigated.
Based on both the mineral content and bond strength analyses of this study, either surface treatment type (control, Er:YAG laser, or femtosecond laser) can be used on the dentin surface before resin cementation. After laser treatment (Er:YAG or femtosecond laser), Clearfil SE Bond is recommended instead of Clearfil SE Protect, because it preserves more of the mineral content of dentin (P < 0.005).
|
Comparison of different noninvasive scores for assessing hepatic fibrosis in a cohort of chronic hepatitis C patients | 34a47e1f-489b-4823-89cf-c05b8df7156d | 11603190 | Biopsy[mh] | Hepatitis C virus (HCV) infection is one of the most important causes of chronic liver disease worldwide. According to a nationwide demographic health survey in 2015, Egypt had the highest prevalence of HCV antibody seropositivity (10%) and viremia (7%) worldwide. However, these numbers have changed, and the prevalence has decreased with the success of the country's strategy for combating HCV. Liver fibrosis as a consequence of chronic HCV continues to be a significant challenge that requires more research to find innovative methods for diagnosis, treatment, and follow-up. For a long time, liver biopsy was the gold standard for assessing the progression of liver fibrosis in HCV patients. However, owing to significant advancements in HCV treatment in recent years, as well as the well-described limitations and complications of liver biopsy, patients and physicians are no longer willing to accept liver biopsy. In addition, even after HCV treatment, following up the stage of liver fibrosis is important not only to predict regression and improvement but also to prioritize surveillance for complications such as hepatocellular carcinoma (HCC) and portal hypertension. Accordingly, this need has led to the search for reliable, accessible, noninvasive, and acceptable alternatives to liver biopsy for evaluating liver fibrosis, such as serum biomarkers and imaging modalities. Many well-established scores based on noninvasive serum biomarkers have been used in clinical practice to assess the stage of hepatic fibrosis before direct-acting antiviral (DAA) therapy and have been supported by many international guidelines. The accuracy of noninvasive clinical and laboratory scores varies according to the underlying etiology of liver disease. Several indexes are widely used specifically in patients with chronic HCV, such as the aspartate aminotransferase (AST)-to-platelet ratio index (APRI) and the fibrosis-4 (FIB-4) index. Therefore, this study aimed to evaluate the diagnostic performance and accuracy of six serological noninvasive scores compared to liver histopathology for staging liver fibrosis in a large retrospective cohort of Egyptian treatment-naive chronic HCV patients.
Population

In this retrospective cohort study, we screened the data of 31,659 chronic HCV patients who had undergone percutaneous liver biopsy for study eligibility. Liver biopsy was a treatment prerequisite prior to antiviral therapy according to the standardized national protocol for the treatment of HCV in Egypt before 2017. Patients had to undergo liver biopsy as a pretreatment requirement during the interferon era (2006–2014). Moreover, with the early introduction of DAAs in Egypt in 2014, liver biopsy was utilized to prioritize treatment for those with advanced fibrosis. Individual patient data were collected from the National Network of Treatment Centers (NNTC), which connects all viral hepatitis treatment centers in Egypt. All patients were included in the national treatment program according to the protocol issued by the National Committee for Control of Viral Hepatitis (NCCVH), which was regularly updated. HCV infection was confirmed by detecting anti-HCV antibodies using enzyme-linked immunosorbent assay (ELISA) and quantitative HCV RNA using PCR. Treatment-experienced patients, those co-infected with hepatitis B virus (HBV) or human immunodeficiency virus (HIV), patients with other liver diseases, patients with hepatocellular carcinoma (HCC) or other malignancies, those with decompensated liver cirrhosis, and pregnant or lactating women were all excluded from the analysis. Patient demographics, medical history, clinical assessment, and laboratory investigations were extracted from the database. The laboratory tests included alanine aminotransferase (ALT), aspartate aminotransferase (AST), gamma-glutamyl transferase (GGT), serum bilirubin, serum albumin, alkaline phosphatase (ALP), INR, complete blood count (CBC), random blood glucose level and the hemoglobin A1C (HbA1C) test. The upper limits of the normal reference ranges used in our analysis were as follows: ALT (40 U/L), AST (40 U/L), and platelets (450 × 10^9/L). The study was performed in accordance with the principles of the Declaration of Helsinki and was approved by the NCCVH and the Institutional Review Board (IRB) of the Faculty of Medicine, Helwan University (serial: 77–2023). The Ethics Committee of the Faculty of Medicine, Helwan University, waived the necessity for informed consent because of the retrospective nature of the study.

Liver histopathology results

Data on histopathology were extracted from the database. As per the requirements for liver biopsy according to the NCCVH guidelines, every patient underwent an ultrasound-guided liver biopsy from the right hepatic lobe using a 16-gauge needle. The tissue samples were subjected to fixation in formalin, followed by embedding in paraffin. Subsequently, staining was performed using hematoxylin and eosin (H&E) and reticulin silver using the Masson trichrome method for histopathological assessment. According to the applied guidelines, two expert pathologists had to review the liver biopsy reports independently. The stage of fibrosis was scored according to the METAVIR scoring system on a 5-point scale: F0 = no fibrosis; F1 = portal fibrosis without septa; F2 = portal fibrosis with few septa; F3 = numerous septa without cirrhosis; and F4 = cirrhosis. Accordingly, significant fibrosis was referred to as F2, advanced fibrosis as F3, and cirrhosis as F4.

Calculation of different liver fibrosis scores

The following scores were calculated directly using the equations from the original papers, as listed in Table .
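The score equations themselves are given in the table rather than reproduced here; as an illustrative sketch, the two most familiar of the six indexes can be computed from their original published formulas (APRI from Wai et al.; FIB-4 from Sterling et al.), using the study's AST upper limit of normal of 40 U/L and a hypothetical patient:

```python
# Sketch of two of the six indexes from their standard published formulas;
# the patient values below are hypothetical, not study data.
import math

def apri(ast_iu_l: float, ast_uln: float, platelets_10e9_l: float) -> float:
    """APRI = (AST / upper limit of normal) / platelet count (10^9/L) x 100."""
    return (ast_iu_l / ast_uln) / platelets_10e9_l * 100

def fib4(age_years: float, ast_iu_l: float, alt_iu_l: float,
         platelets_10e9_l: float) -> float:
    """FIB-4 = (age x AST) / (platelets x sqrt(ALT))."""
    return (age_years * ast_iu_l) / (platelets_10e9_l * math.sqrt(alt_iu_l))

# Hypothetical patient: age 50, AST 80 IU/L, ALT 64 IU/L, platelets 150 x 10^9/L.
print(apri(80, 40, 150))      # 1.33
print(fib4(50, 80, 64, 150))  # 3.33
```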
Statistical analysis

Data were collected and analyzed using SPSS (Statistical Package for the Social Sciences, version 20; IBM, Armonk, New York). The Shapiro–Wilk test was used to determine whether the data followed a normal distribution. Quantitative data with a normal distribution are expressed as mean ± standard deviation (SD), while quantitative data with a non-normal distribution are expressed as median (25th–75th quartile) and were compared by the Mann–Whitney U test. Nominal data are given as numbers (n) and percentages (%). The diagnostic performance of the different noninvasive models was determined by the area under the receiver operating characteristic (ROC) curve. Positive predictive values (PPV) and negative predictive values (NPV) were also obtained for the cutoff value of each test. The confidence level was kept at 95%; hence, a P value < 0.05 was considered significant.
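The analysis itself was run in SPSS; purely as an illustration of how a best cutoff, AUC, PPV, and NPV can be derived from a ROC curve, a scikit-learn sketch with made-up values (not study data) is:

```python
# Illustrative sketch of ROC-based cutoff selection (the study used SPSS;
# this assumes scikit-learn and invented values, not study data).
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

y_true = np.array([0, 0, 0, 1, 1, 1, 1, 1])   # 1 = fibrosis >= F2 on biopsy
score = np.array([0.8, 1.1, 1.4, 1.2, 2.0, 2.6, 3.1, 3.9])  # eg, FIB-4 values

fpr, tpr, thresholds = roc_curve(y_true, score)
auc = roc_auc_score(y_true, score)

best = np.argmax(tpr - fpr)                    # Youden index J = sens + spec - 1
cutoff = thresholds[best]
print(f"AUC = {auc:.2f}, best cutoff = {cutoff:.2f}")

pred = score >= cutoff
tp = np.sum(pred & (y_true == 1)); fp = np.sum(pred & (y_true == 0))
fn = np.sum(~pred & (y_true == 1)); tn = np.sum(~pred & (y_true == 0))
ppv = tp / (tp + fp); npv = tn / (tn + fn)
print(f"PPV = {ppv:.2f}, NPV = {npv:.2f}")
```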
A review of the NCCVH database for the period before 2017 found that 31,659 patients had registered liver biopsy results. Of these, 19,501 patients met the inclusion criteria for the study (the study flow chart is shown in Fig. ).
Demographic and baseline data of the studied patients
Of the studied patients, 11,463 (60.2%) were female. Up to 11% of the study group were diabetic. Baseline laboratory data and noninvasive markers of fibrosis are summarized in Table . Based on liver biopsy results, fibrosis stages were F0, F1, F2, F3, and F4 in 122 (0.60%), 1349 (7.1%), 6934 (36.4%), 7846 (41.2%), and 2800 (14.7%) patients, respectively. In total, 1471 (7.70%) patients had no fibrosis (F0–F1) and 18,030 (92.3%) had fibrosis (F2–F4). We categorized the enrolled patients by fibrosis stage into those with no fibrosis (F0–F1) and those with fibrosis (F2–F4), as shown in Table . ROC curves were then constructed to determine the best cutoff values of the six indices for discriminating significant fibrosis (≥ F2), advanced fibrosis (≥ F3), and cirrhosis (F4), as shown in Supplementary Tables 1, 2, and 3.
Fibrosis scores in relation to histopathology fibrosis stage
All six studied fibrosis scores showed significantly higher values among patients with fibrosis (F2–F4) than among those with no fibrosis (F0–F1).
Accuracy of the six scores in prediction of fibrosis stage (Fig. )
For the prediction of significant fibrosis (≥ F2), FIB-4 had the best diagnostic accuracy (overall accuracy 73.3%; area under the curve (AUC) 0.70), followed by King's score (accuracy 67.7%; AUC 0.70). For the prediction of advanced fibrosis (≥ F3), FIB-4 again had the highest accuracy (66.2%; AUC 0.71), followed by King's score (66%; AUC 0.71). For the prediction of cirrhosis (F4), King's score and FIB-4 had the best diagnostic accuracy (76.5% and 75.9%, respectively), with an AUC of 0.82 for both. The accuracy of the six scores in predicting fibrosis stages is illustrated in Fig. .
Several noninvasive markers for assessing liver fibrosis have emerged over the last few years and are extensively utilized in clinical practice. Our study aimed to evaluate and compare the diagnostic performance and accuracy of six noninvasive fibrosis scores and indices in chronic HCV patients (FIB-4, APRI, King's score, Fibro-Q, fibrosis index, and Fibro-α score). All the studied scores were statistically significant and valid for predicting different stages of liver fibrosis. While these noninvasive markers are valuable tools for assessing liver fibrosis in chronic HCV patients, it is important to note that they do not accurately reflect histological inflammation, which typically requires liver biopsy for precise diagnosis and staging , . FIB-4 and APRI are the most widely validated and used noninvasive scores for diagnosing advanced fibrosis (F3) and cirrhosis (F4) in chronic HCV patients . The diagnostic accuracy of FIB-4 has been assessed against liver biopsy results in various studies of chronic HCV mono-infected patients. Most of these studies confirmed that a FIB-4 cutoff below 1.45 can accurately exclude significant fibrosis, with a sensitivity of 74.3%, specificity of 80%, and negative predictive value of 94.7%. Conversely, a FIB-4 cutoff > 3.25 can accurately confirm the presence of advanced fibrosis, with a sensitivity of 82.1% and specificity of 98.2% , , . Other studies reported lower FIB-4 cutoffs of 2.9 and 2.25 for predicting advanced fibrosis , . In our results, the optimal FIB-4 cutoff for predicting advanced fibrosis (≥ F3) was 2.01 (sensitivity 65.6%, specificity 66.9%), with an AUROC of 0.71. This cutoff and AUROC were lower than those reported in the aforementioned studies; nevertheless, the new cutoff demonstrated better accuracy in diagnosing advanced fibrosis in our population. Furthermore, we report a FIB-4 cutoff of 2.21 for detecting cirrhosis (F4), with an AUROC of 0.82, PPV of 83%, sensitivity of 77%, and specificity of 74%. Although most studies could not establish distinct cutoff values to discriminate between advanced fibrosis (F3) and cirrhosis (F4) , , , our analysis was able to identify values that discriminate both stages. This result is in accordance with other Egyptian studies conducted on similar populations , . Several studies have proposed a validated APRI threshold of 0.5 for predicting significant fibrosis in patients with chronic HCV infection, with a sensitivity of 77%–86% and a specificity of 49%–65%; another proposed cutoff is 1.5, with a sensitivity of 32%–47% and a specificity of 89%–94% , , , . In our cohort, the APRI cutoff values were > 0.55 (sensitivity 67%, specificity 59%), > 0.71 (sensitivity 63%, specificity 65%), and > 0.88 (sensitivity 65%, specificity 66%) for the prediction of ≥ F2, ≥ F3, and F4, respectively. These values were low compared with the findings of Rungta et al., who used cutoffs of 1.2 and 1.5 to identify significant fibrosis (F2) and advanced fibrosis (F3), respectively . A meta-analysis that included 40 studies with 8739 patients concluded that the optimal APRI cutoff is > 0.7, with an AUROC of 0.77 (sensitivity 77%, specificity 72%), which performed better for predicting F2; a cutoff of 1.0 had a sensitivity of 61% and a specificity of 64%, with an AUROC of 0.80, for the prediction of F3.
Moreover, it has been reported that the recommended lower APRI cutoff of 1.0 for the prediction of cirrhosis had 76% sensitivity and 72% specificity, while the higher recommended cutoff of 2.0 had 46% sensitivity and 91% specificity , . Our study likewise found that FIB-4 remains superior to APRI in predicting different stages of fibrosis in HCV patients. This is consistent with the results of Bonnard et al., whose Egyptian cohort study was the first in Egypt to evaluate noninvasive measures of fibrosis assessment in a population similar to ours; Bonnard and colleagues reported lower cutoff values for both APRI and FIB-4 than ours and concluded that FIB-4 performs better than APRI for predicting different fibrosis stages in the Egyptian population . The discrepancies in FIB-4 and APRI diagnostic accuracy and cutoff values between our results and previously published studies have several possible explanations. First, as an extensive real-life study, our cohort had an unequal distribution of fibrosis stages: only 14.7% of our patients had cirrhosis and 7.7% were in stages F0–F1, while most of the study population (77.6%) were in stages F2–F3. This unequal distribution can affect the apparent performance of any diagnostic approach (the short numerical sketch below illustrates how prevalence shifts overall accuracy and the predictive values). Second, the reproducibility of these scores is influenced by the parameters in their formulas, such as age, AST, and ALT. Of note, most of the Egyptian population affected by the HCV epidemic belonged to a particular (relatively older) age group, because they shared the same risk factor of acquiring the disease in the same period . Finally, high necro-inflammatory activity may increase transaminase levels and thereby affect the accuracy of these scores . Although FIB-4 and APRI are reliable, easy, and rapid formulas for staging liver fibrosis, both should be used cautiously in patients with highly elevated liver enzymes or evidence of increased necro-inflammatory activity. King's score results in our study were very promising: considering its AUC, diagnostic accuracy, sensitivity, and specificity, it performed very well in predicting different fibrosis stages, superior to APRI and second only to FIB-4. Published studies report variable diagnostic accuracy and cutoffs for King's score , , . Our study showed the highest AUC (0.82) for the prediction of cirrhosis by King's score at a cutoff of 17.4, with good sensitivity and specificity (79% and 72%, respectively). The discrepancies in cutoffs across studies could be attributed to the reference histopathological staging systems used: our study relied on the METAVIR classification (F0–F4), while most other studies used the Ishak classification (F0–F6). Our study also showed that Fibro-Q performed better than APRI (PPV 95%) and was comparable to FIB-4 in accuracy, with an optimal cutoff > 2.24 and an AUROC of 0.68 for the prediction of significant fibrosis (≥ F2). This agrees with Hsieh et al., who demonstrated that Fibro-Q had better accuracy than APRI (cutoff > 1.6; AUROC 0.78) for predicting significant fibrosis but low diagnostic accuracy for predicting cirrhosis. Unfortunately, published studies of the Fibro-Q score for predicting different stages of fibrosis remain limited, and further studies are needed to validate its usefulness in clinical practice.
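To illustrate the point about the stage distribution, the sketch below applies the standard relationships between prevalence, overall accuracy, and the predictive values; the sensitivity and specificity used are illustrative placeholders, not our exact figures.

```python
# Minimal sketch: how overall accuracy, PPV, and NPV at a fixed cutoff change with
# disease prevalence, holding sensitivity and specificity constant.
def metrics_at_prevalence(sens: float, spec: float, prev: float):
    accuracy = sens * prev + spec * (1 - prev)
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
    return accuracy, ppv, npv

# Placeholder sensitivity/specificity; 0.923 mirrors the share of >= F2 in our cohort.
for prev in (0.923, 0.50):
    acc, ppv, npv = metrics_at_prevalence(sens=0.70, spec=0.65, prev=prev)
    print(f"prevalence {prev:.3f}: accuracy {acc:.2f}, PPV {ppv:.2f}, NPV {npv:.2f}")
```

At the cohort-like prevalence, PPV is very high while NPV is low; at a balanced prevalence, the two converge. This is one reason cutoffs and headline accuracies transfer poorly across cohorts with different stage distributions.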
Fibro index (FI) was initially designed to diagnose liver fibrosis related to HCV infection; it showed a high median AUROC of 0.86 with a high PPV (90%), giving better accuracy for identifying cirrhotic patients than APRI and the Forns index , . A systematic review by Chou et al. showed that the median AUROC for cirrhosis was 0.86 and that FI and APRI performed similarly for detecting cirrhosis . In our study, FI performed similarly to APRI for predicting cirrhosis, with an AUROC of 0.79 (sensitivity 73%, specificity 71%), in line with the results of Chou et al. Furthermore, FI performed worse than FIB-4, APRI, and King's score for predicting significant and advanced fibrosis. In our study, the Fibro-α score performed poorly in predicting the different fibrosis stages compared with the other scores, with AUROCs of 0.54, 0.52, and 0.56 for predicting significant fibrosis, advanced fibrosis, and cirrhosis, respectively. Conversely, Omran et al. and Attallah et al. reported higher diagnostic performance for this score in predicting different fibrosis stages and suggested that it can be used as a valuable tool for predicting liver cirrhosis in chronic HCV patients. The current study has several strengths. First is the large number of included patients who underwent liver biopsy (19,501 patients), covering a wide spectrum of fibrosis stages. Second, we relied on liver biopsy results for all patients as the gold-standard reference for fibrosis staging. Moreover, liver diseases other than chronic HCV were excluded to avoid misinterpretation of the biopsy results. On the other hand, it is essential to note that our study has some limitations. First, the proportion of cirrhotic patients (F4) was small (14.7%) compared with that of patients with advanced fibrosis (F3; 41%), which could introduce some heterogeneity. This was mainly a consequence of the real-life nature of the study, which recruited all eligible patients referred for antiviral therapy. Second, because of the large number of patients and the multicentric nature of the study, we could not rely on a single pathologist to interpret all biopsies; however, this was partly mitigated by requiring a consensus of two pathologists for each biopsy report. Finally, the retrospective nature of the study could introduce some bias. Nevertheless, we used well-documented cases from the medical records and database that met our predetermined inclusion criteria, and the large number of available records enabled us to assess the scope of our study accurately. In conclusion, among the six validated scores, four (FIB-4, King's score, APRI, and Fibro-Q) had better diagnostic performance for predicting different fibrosis stages in chronically infected HCV patients. Our study supports using FIB-4, followed by King's score, to identify patients with advanced fibrosis who could be prioritized for surveillance, follow-up, and monitoring of complications. Using more than one score could be considered, especially in primary healthcare settings and resource-limited areas, to rapidly stratify patients who need more care and referral to specialized centers.
Supplementary Information.
Addressing Health Illiteracy and Stunting in Culture-Shocked Indigenous Populations: A Case Study of Outer Baduy in Indonesia | 1fb372a5-ed9a-4e2b-98e1-e29efd6b44c9 | 11431049 | Health Literacy[mh] | Many countries struggle with high levels of health illiteracy and low effectiveness of health campaigns for their citizens . This issue is particularly pronounced among Indigenous populations, who are often targeted by educational activities designed to reduce health illiteracy . Indigenous peoples are frequently categorized as minority groups due to their exclusion from regional political, security, and economic systems . Health illiteracy has significant negative impacts on health outcomes, increasing the likelihood of incorrect health practices. One severe consequence of this is stunting. A child can experience stunting if they do not receive the minimum nutritional requirements needed for normal growth. Factors such as low health literacy, poverty, and poor surveillance systems can contribute to inadequate nutrition, leading to stunting . Research indicates that Indigenous people are particularly vulnerable to stunting due to the combined effects of health illiteracy, poverty, and insufficient surveillance within their communities . Studies of various Indigenous populations, such as the First Nations in Canada and the Aboriginal and Torres Strait Islanders in Australia , demonstrate that health literacy is a crucial predictor of health quality. However, Indigenous communities often face cultural, language, and policy barriers, as well as economic isolation and inequality, which make them difficult to reach through modern health promotion and education efforts . In Indonesia, the Baduy community from Banten province is one such Indigenous group facing challenges related to health literacy and stunting. The Baduy people live in Kanekes, a mountainous village in the centre of the province, about 100 km from the provincial capital, Serang, and 150 km from Tangerang City, which is part of the Greater Jakarta area. The map below shows the location of the Baduy community. Literacy is generally a person’s ability to read and write . However, the definition of health literacy is more complex. Health literacy is “a broad range of skills and competencies that individuals develop to search for, understand, evaluate, and use health information and concepts to make informed choices, reduce health risks, and improve quality of life” . Limited health literacy is a silent killer because it secretly causes individual deaths and economic damage in society. In line with this, the benefits of health literacy reach all life activities, such as household, work, society, and culture. The level of health literacy is influenced by language, culture, and social capital factors . Media use is also associated with health literacy because media is closely related to development, health behaviour, and literacy in general. Together with the education system and health system, as well as the influence of family and peers, mass media affects health literacy through changes in individual characteristics, such as language skills, culture, education, social skills, cognitive skills, physical abilities, and media use. Health literacy then determines health behaviour, health costs, and individuals’ use of health services . Political or civil literacy also plays a crucial role in health literacy, enabling citizens to become aware of health issues and engage in decision-making through civic and social channels . 
Health communicators can be individuals experiencing health problems or those who feel it is important to communicate health information to others. These individuals face challenges such as difficulty explaining their illness, difficulty asking questions, and embarrassment about their health issues . An exploratory study on health communication with African American prostate cancer survivors and their families revealed that the easiest health issue to communicate is death, followed by illness: participants ranked six health topics from most to least avoided, and death was the least avoided issue. Harder to communicate are health problems in social relationships and the financial aspects of health, and the most difficult issues are sexual and marital health and the health of non-heterosexual groups. Health communication often starts within the family, with women typically being more approachable for health-related discussions than men; as a result, women frequently take on the role of health communicators or caregivers within families . Even though reliable information is available from the government , the initial and most frequent health communication occurs at the family level, where health concerns are discussed, information is shared, and healthcare decisions are made. From the perspective of non-sufferers, health workers are often highlighted as key communicators. Effective health communication skills in health workers can ensure that patients or the public understand necessary health information, address feelings of uncertainty, and build healing relationships . In Indonesia, health policy defines stunting as impaired growth and development due to chronic malnutrition and recurrent infections, characterized by body length or height below set standards . Stunting is widespread in the Baduy community. A 2009 report by Anwar and Riyadi indicated that the prevalence of stunting in the Baduy community was 60.6%, much higher than the national average at the time (36.8%). Data from 2022 put the prevalence in this community at 54% , a small decline when set against the national trend: stunting prevalence in Indonesia as a whole has already fallen to 21.6% . This research takes a health communication approach to address health literacy and stunting in the Baduy community. The study aims to determine the factors, impacts, and solutions for health illiteracy and stunting in the Outer Baduy of Kanekes Village. The article proceeds as follows. describes the sociocultural characteristics of the Outer Baduy community. presents the qualitative research methodology used in this exploratory study. highlights the empirical findings related to stunting literacy. Finally, discusses the research implications for academics and practitioners, focusing on Indigenous communities such as the Baduy.
Culturally, Baduy society is divided into Inner Baduy (Baduy Dalam) and Outer Baduy (Baduy Luar). The Inner Baduy community is classified as “earth hermits”, a group who adhere strongly to conservative principles, not changing their way of life over time and having no contact with outsiders. Natural resources and the environment are carefully preserved because they are the only mainstay of people's lives . There are predictions that the Baduy people, especially the Inner Baduy, will soon become extinct because they are too strict and do not open themselves up to the outside world. The Outer Baduy community, meanwhile, still strictly maintains its customs but has opened a small space for transactional relationships with outside communities without losing those customs . As an Indigenous society, the Baduy people hold various types of taboos. Taboos in Baduy society are of three types: to protect the soul (human), to safeguard the purity of the mandala (motherland), and to preserve the purity of traditions. An example of a taboo that ensures the purity of tradition is the script taboo, the prohibition against learning Latin and Arabic letters. Old Sundanese letters are still permitted, and several tangtu (traditional leaders) know cacarakan (the Sundanese alphabet) . According to the Baduy's beliefs, their lifestyle is “asceticism in nagara (country), asceticism in the kingdom, asceticism in the mandala (motherland), and asceticism in the holy land (their village)” . They live ascetically, which means farming according to their ancestors' rules, and the puun (chief spiritual leader) ensures that the community adheres to these rules. There is one puun for each of the three villages of Inner Baduy, so there is no single leader, only these three puuns . The Outer Baduy area does not fully surround the geographical area of Inner Baduy: in the south of Kanekes village, the Inner Baduy area borders the outside of the village directly, without passing through Outer Baduy territory. However, this area is naturally difficult to penetrate because it is hilly, with two hills, Kendeng and Hoe. The Inner Baduy area includes the villages of Cikartawana, Cibeo, and Cikeusik. However, exact geographical boundaries such as today's are only approximate and can change over time, because villages within Kanekes can change position for various reasons: a village can move after a fire damages many wooden houses, or because of guidance attributed to the ancestors, conveyed through the spiritual leaders, asking the community to move . Most of the Baduy people live as farmers. However, the Indonesian government has made Kanekes a cultural tourism destination, so the Baduy community earns additional income from this sector. Many visitors from outside also come to see the life of the Baduy people, who are widely known as one of the ethnic groups in Indonesia that still adhere to strong customs and traditions. The Baduy people refuse to be called an isolated community and have committed to planning their future with the government and outside communities . The Baduy community, like Indigenous communities in other areas of Banten, maintains relations with the local government.
For traditional leaders, annual ceremonies, such as seba (walking together to visit the government official in the city), are an effort to maintain and preserve the community’s cultural identity and a strategic communication medium for dialogue between Indigenous communities and the government .
3.1. Design
This research employs a case-study approach, a qualitative method in which the researcher investigates a specific system (case) or multiple systems (cases) over time through in-depth data collection from various sources . The purpose is to provide a detailed understanding of the factors, impacts, and solutions related to health illiteracy and stunting in the Outer Baduy of Kanekes Village. Given its objectives, this research falls under exploratory case-study research . It focuses solely on the Outer Baduy community without comparing other cases, so it qualifies as a single case study . Furthermore, this study is intrinsic, selected on the basis of the case's uniqueness rather than its representativeness or comparability with other cases .
3.2. Participants and Setting
The setting for this research is the Baduy community in Kanekes Village, specifically the hamlets of Ciranji Pasir, Ciranji Lebak, Cijanar, Ciemes, and Cibagelut. These locations were chosen because they are near the village health post managed by the Indonesian Volunteer Friends (SRI), an NGO. This health post, established on 20 November 2021, replaced a defunct government health post and continues to detect many stunting cases in the area (see ). Six participants were selected using a snowball sampling technique: key individuals were identified first and then referred the researcher to others for interviews. The key individual in this research was a midwife from the Dompet Dhuafa Foundation who worked as a volunteer serving the Baduy people. She referred four people: the head of the NGO SRI, a midwife from Dompet Dhuafa, and two mothers of stunted children from the Baduy community. The head of the NGO SRI referred the head of the Lebak Regency social service, while a local mother referred a neighbourhood leader of the community and another mother. According to Creswell , this purposive sampling technique identifies information-rich cases from people with specific knowledge or experience relevant to the phenomenon under study: in this case, the factors, impacts, and solutions related to health illiteracy and stunting in the Outer Baduy of Kanekes Village. A preliminary analysis of the interview data showed that theoretical saturation was achieved with six participants: no new information emerged from a seventh participant, another mother of stunted children, indicating that further data collection would not yield new insights, so data collection was concluded.
3.3. Data Collection
Data-collection techniques included semi-structured interviews and secondary data analysis. Semi-structured interviews allowed a directed exploration of causal chains in the case study. Three groups of informants were interviewed: Baduy community members (two mothers of stunted children, referred to as Baduy 1 and Baduy 2), one Baduy community leader (at the neighbourhood level), and external stakeholders (an NGO officer, a midwife, and a government social service officer). The interviews covered factors associated with low health literacy, its impact, and strategies to improve health literacy . Secondary data were obtained from a YouTube video titled “Multidisciplinary Talkshow: Stunting in Baduy”, featuring researchers and a health worker who provided services to prevent stunting in the Baduy community. The relevant researchers included an archaeologist, a dentist, and a nurse.
The Directorate of Community Service and Empowerment, University of Indonesia (DPPM—UI), created the video to document their community service program.
3.4. Ethical Considerations
The participants were informed about all study activities, and informed consent was obtained before data collection. The participants were assured of their right to request additional information and were told that their data would be kept confidential and accessible only to the researcher. An approval letter for research on the Baduy community was obtained from the Padjadjaran University Research Ethics Committee (KEP UNPAD).
3.5. Data Analysis
Data analysis was conducted using a holistic approach, which involves a comprehensive examination of the entire case or certain aspects of it . This research employed holistic analysis in two main steps: description and thematization. The case was described in detail, from its history to daily activities, and critical issues were identified to gain a deeper understanding of the case's complexity. Themes were developed from transcriptions and coded interview texts. To maintain quality, the research addressed dependability, confirmability, transferability, and credibility . Dependability was addressed through method triangulation, using interviews and the video to verify data consistency. Confirmability was achieved through confirmability audits, re-checking the data and analysis results for consistency and neutrality. Transferability concerns how generalizable research results are to other contexts ; it was addressed by providing a detailed description of contextual factors, enabling comparisons. Finally, credibility was maintained through source triangulation, comparing interview results from different sources, and through member testing, in which the interviewees' responses were checked during the research process to ensure they were interpreted accurately , .
First, we describe the factors associated with the low health literacy of the Outer Baduy community and their impact on this community. Second, we present the results of a qualitative analysis that produced five themes describing the strategies stakeholders can use to address health illiteracy in the Outer Baduy community: developing the health literacy of community leaders, managing information-technology-based health-information groups, always having at least one health worker present in the community, encouraging joint reflection when health cases occur, and balancing gender communication. The table below summarises the themes, subthemes, and categories of the analysis. The data were triangulated by comparing the interviews and the video; we found no conflicting information, and the two sources complement each other.
4.1. Factors Associated with Low Health Literacy in the Baduy Community
The Baduy people's health illiteracy is closely related to broader illiteracy stemming from their inability to read. The role of this general illiteracy is revealed in the narrative of a Baduy mother who had to ask the midwife repeatedly about the writing on her family planning acceptor card: “If I do not repeat it, I was afraid I will not understand the writing on the card, so I ask Teh Ira”. (Baduy 2) SRI volunteers revealed that customary prohibitions against eating certain foods are a limiting factor, because some of the prohibited foods, such as beef and chicken, are quite nutritious. This prohibition discourages people from seeking health information related to nutrition and its role in growth and health: “We use Baduy as the stunting pilot project site because this tribe emphasizes many prohibitions related to food, related to customs that must not be violated”. (NGO) The strictness of this prohibition was also described by a source from the Lebak Social Service, who recounted how the community refused large amounts of additional food provided by the government: “At that time, the Ministry of Social Affairs had food-giving activities in Baduy. We warned that Baduy people only need rice, salted fish, and shrimp paste. Nevertheless, they sent all other foods. Hence, many containers of food came just to be rejected by the Baduy at that time”. (The Government) A solution to the food problem is to give eggs to Baduy families, since eggs are a good protein source and are not prohibited by customary law. However, the condition of the families was so bad that the eggs, originally provided for the children, were eaten by their parents: “We are ashamed. [The egg] should have been provided by their parents. We are also ashamed because the eggs are provided for stunted children, but their parents eat them”. (Punggawa) In terms of time, Baduy people have difficulty improving their literacy because both husbands and wives spend their days in the fields. Midwife Ira explained that she could not provide education to the community because people were not at home; they work in the fields for their daily food needs: “Well, the target does not like anything like that, ma'am. For example, I want to provide additional food like that, giving eggs or holding a gathering for education and providing additional food like that. The target is in the huma [rice field], in the fields, so there are only 1 or 2. So, it is less effective here because of the people. They do not stay home every day, so they are also busy with their farming activities.
Their livelihood is farming, right, ma'am? If they do not farm, they cannot eat because they earn money there; that is how it is”. (Midwife) Midwife Ira also revealed that, in general, people learn by imitating what their parents did before them rather than seeking health information about whether a practice is appropriate: “And also looking at the elders' previous [bad] habits, yes, they have been followed [uncritically by the community], so that is [a sign of] a lack of education, huh”. (Midwife) Meanwhile, a professor of archaeology at the University of Indonesia who took part in community service activities for the Baduy community emphasized that access is one of the factors associated with the community's low health literacy. The Inner Baduy community is difficult to access, so health-literacy programs cannot be implemented there, while the Outer Baduy community is more easily reached and is therefore also more health literate: “Well, what is interesting about our survey findings is that we divided three areas so there is Inner Baduy, and this is the area where access is difficult, so if we look for it, it is also problematic; there is also little knowledge about health and stunting. That is the characteristic of what is Inner Baduy. However, for Outer Baduy, communication is accessible because their knowledge regarding various kinds of programs has been widely accepted. On the other hand, because it is easy to access, many government, private, or community programs are carried out in the village. This village, so yes, is in the villages of Kaduketuk and then Cijahe and the surrounding areas, and the people are already used to receiving programs”. (Archaeologist) Another external factor is the lack of consistency from external parties in building the literacy of the Baduy community: the programs provided are not sustained, so they are less effective. The head of one neighbourhood (punggawa) said: “Someone once told about stunting, but there was no further follow-up”. (Punggawa) Finally, another notable cultural factor is gender segregation. The researcher probed the participant's understanding by asking whether the health information had been understood; Baduy 1 responded by indicating a lack of understanding and suggesting that men might understand it better: People usually come here from the health centre or even the health department. They are people who are far away. Did they give health information? (The Author) No, maybe men understand. (Baduy 1) Baduy 1 insisted that health information, especially about stunting, should be conveyed to men for clarity: “If you talk like this [health information about stunting], if you want it to be clear, you have to convey it to a man”. (Baduy 1) When asked why health information should be communicated to men, Baduy 1 expressed uncertainty about the exact reason, attributing the preference to traditional gender roles and practices: men typically handle such matters, and community meetings are attended exclusively by men: “Don't know. That's a man's business. They held a meeting together. You should talk about this with a man. If there is a community meeting here, all the people who come are men”. (Baduy 1) The dialogue reflects cultural norms and gender roles within the Baduy community, where men are seen as the primary recipients and conveyors of important information.
It highlights a potential barrier to effective health communication, as women might not be seen as appropriate recipients of health-related information despite their critical role in child rearing and health maintenance.
4.2. The Impact of Low Health Literacy in the Baduy Community
The most pronounced impact of the Baduy community's low health literacy is fatalism. The community views a health condition as destined, something to be accepted rather than corrected. For example, when a mother was asked why her child is stunted, the reason given was genetic rather than nutritional: “His father's [body] was small, so Sardin's [body] was small”. (Baduy 1) The fatalism of the Baduy people does not appear to be ideological, at least in health matters, as revealed in the case of snake bites. The Baduy community understands that snakes are their greatest threat; despite knowing the danger, they cannot avoid it because their work takes them into snake-prone areas. They feel resigned to their situation due to the lack of alternatives, so there is a philosophical acceptance of death as something predestined. However, the introduction of medical intervention brings hope for improvement: “So far, they have completely ignored the data regarding deaths due to snake bites, thinking that they have just given up. This belief [the folk legend that Baduy is immune to snake venom] is just a tactic from kokolot [community elders]. [The actual belief is that] when someone's life is ended [by snake bites], it is predestined. There's something [belief] like that. But when medical people come in and all that [health facilities], [they start to believe that] there are things that can be fixed”. (NGO) Another impact is the high maternal and child mortality rate, as described by a resource person from the University of Indonesia Hospital in a community service talk show: “Well, yesterday, we looked at the data from the last few years. Yes, in the last two years and three years, Baduy had a high maternal mortality rate. Yes, up to 4 people per year. Well, this is quite extraordinary. Just one is quite high. Here, it is up to four every year. The child death figure is even higher. There are cases of child deaths, especially neonates. They died when they were born. Well, this case is also quite high. Between 9 and 14 children per year die in Baduy”. (Nurse) Lack of health literacy also traps the Baduy community in a vicious circle: low health literacy makes them less able to maintain their health and protect themselves from disease, which leads to health costs they cannot afford, and this inability in turn makes them even more fatalistic, as shown in an interview with a Baduy neighbourhood leader: “Sometimes there are things like this, ma'am. For example, the village used to be a tiny population. Even then, they said no matter how badly my child is sick, do not take him to the medical centre. Could you not take him to Rangkasbitung? It is expensive, so where do we get the money to pay the bill? That is what the chatter of old people is like”. (Punggawa)
4.3. Strategies to Improve Baduy Community Health Literacy
4.3.1. Developing the Health Literacy of Community Leaders
The Baduy community still depends on leaders such as the Jaro and other traditional authorities. If community leaders have sufficient health literacy, they can pass it on to other community members.
So far, community leaders have encouraged public health, but their messages remain at the level of health actions and have not yet reached the health-literacy stage: “The point is that Jaro and Puun also urge the Baduy people to consume nutritious food, but not force it because there is a clash of customs [such as eating healthy but prohibited food] that cannot be forced”. (Midwife) The potential for the Jaro to become a community health-literacy agent is enormous, because the community attends monthly meetings to discuss all matters related to customary law, and health-literacy recommendations can be included in these events: “We were invited to a community gathering every month. The purpose of the meeting is to provide instructions to the village or the parents; everything is talked about and entrusted to the parents. Only recently has this happened. Violations of various kinds are reported, and then there are instructions, such as if you [want to] eat [certain] food [from outside], you must be careful [that the food contains prohibited or unhealthy items]. It was recently held with the community. Every month, the meeting is held”. (Baduy 1)
4.3.2. Managing Information-Technology-Based Health-Information Groups
Even though the Baduy people live traditionally, they have cell phones and use them mainly for business purposes . One central aspect of health literacy is the ability to search for and sort information using information technology, and Baduy people are not used to looking for health information on their phones. Moreover, according to custom, information technology is permitted only in the Outer Baduy area: “Technology can come in slowly. Cell phones are already in, but if they want to use them, they must go to Outer Baduy, ma'am. In Inner Baduy, they must go outside the village first. Then they are now free to meet their friends”. (Dentist) Many Baduy already have the cellphone numbers of health workers: “I have all [the midwives' cellphone numbers]. I have the [cellphone] number of the health personnel and midwives in Cibaleger, in Kariki, I have all the [cellphone] numbers”. (Baduy 1) However, health workers still tend to be passive. Given this existing access, a WhatsApp group linking the residents of one village with specific health workers could be formed, with the health workers providing regular, contextual information to increase public health literacy.
4.3.3. Always Having at Least One Health Worker Present among Residents to Provide an Example of Healthy Living
The visible presence of a health worker among community members emphasizes the importance of health and creates a sense of security for the community. The presence of health workers in the community is something that the Baduy people themselves want: “Indeed, in the past, I heard that people from Baduy Dalam, for example, were not allowed to use medical services because it has been like that for generations. Nevertheless, now we live in a wider society. So, people from Outer Baduy have already used modern medicine, right? So sometimes what else can we do if we do not get help”. (Punggawa) Community health workers are also vital in helping the community overcome health problems, as the following account from the SRI chairman shows: “My intention here is nothing different. I do not sell medicine. I do not sell these or those. I want to help the Baduy people because they do not know where to go. Mr.
Jaro said, “Do not leave the Pustu [Subsidiary Health Center]”. No, I will not leave it. If the condition is like this, then that is the condition. So, in the end, that is it. Yesterday, Mr. Jaro's wife had her feet scalded with hot water. I have taken it to the doctor, there are burns. Then, given ointment, she is healthy now. It was Mr. Jaro who asked for treatment directly”. (NGO)
4.3.4. Encouraging Collective Reflection When Health Cases Occur
In extreme situations, it is essential to reflect so that the same incident does not happen again. Incidents such as deaths due to health problems can be raised in communications, as long as this remains sensitive to customary law and ethics, as learning material for the community to be more careful and maintain their health. For example, the incidence of breast cancer can be used as a lesson to avoid foods that are not nutritious and contain carcinogenic substances: “That is why I say when I go around, “Do not eat cilok [chewy tapioca balls], do not eat noodles, better eat boiled bananas”. This local wisdom has now been lost. The children eat cilok every morning; there are cilok around. Snacks. That afternoon, a lady on a motorbike was picked up by her husband and shouted “cilok…cilok”. The children do not want to eat dinner anymore; why? Because the taste is different, there is already flavouring, and all kinds of things, and a generation of flavouring-addicted people has emerged”. (NGO) The research team from the University of Indonesia also described a delay in assistance that resulted in a mother's death: “For example, there was a risky pregnancy, but it turns out that because of a customary problem, she was not allowed to access health services. Finally, after negotiating with the traditional authorities, she was allowed. However, 15 min before arriving at the hospital, the mother died. The mother died on the road. So she died in the ambulance”. (Nurse) A wise approach that is sensitive to customary law is needed well in advance to increase public health literacy so that such incidents do not happen again.
4.3.5. Balancing Gender Communication
Baduy culture assigns governmental, administrative, and technical matters to men, while women still play the traditional role of managing the household and family. Because health literacy is needed regardless of gender, both genders must receive balanced education adapted to their roles. This gender difference is also observed in various locations worldwide, such as Ghana and Hong Kong , where men generally have higher health literacy than women. An efficient gender-specific approach therefore needs to be developed to improve the health literacy of the Baduy community . For example, health literacy related to nutrition can be directed at women, because they culturally play an essential role in agriculture , while procedural and heuristic literacy can be directed at men.
The Baduy people’s health illiteracy is closely related to broader illiteracy due to their inability to read. The role of this general illiteracy is revealed in the narrative of the Baduy mother herself when she had to ask the midwife repeatedly about the writing on the family planning acceptor card: “ If I do not repeat it, I was afraid I will not understand the writing on the card, so I ask Teh Ira ”. (Baduy 2) SRI volunteers revealed that customs regarding the prohibition of eating certain foods were a limiting factor because some of the prohibited foods, such as beef and chicken, were quite nutritious. This prohibition discourages people from seeking health information related to nutrition and its role in growth and health: “ We use Baduy as the stunting pilot project site because this tribe emphasizes many prohibitions related to food, related to customs that must not be violated ”. (NGO) The strictness of this prohibition was also expressed by a source from the Lebak Social Service who described the strict prohibition regarding food so that people refused large amounts of additional food provided by the government: “ At that time, the Ministry of Social Affairs had food-giving activities in Baduy. We warned that Baduy people only need rice, salted fish, and shrimp paste. Nevertheless, they sent all other foods. Hence, many containers of food came just to be rejected by the Baduy at that time ”. (The Government) A solution to the food problem is to give eggs to the Baduy family. Eggs are a good protein source and are not prohibited by customary law. However, the condition of the families was so bad that the eggs, originally provided for the children, were consumed by their parents: “ We are ashamed. [The egg] should have been provided by their parents. We are also ashamed because the eggs are provided for stunted children, but their parents eat them ”. (Punggawa) From the time dimension, Baduy people have difficulty increasing literacy because they spend time in the fields, both the husband and the wife. Midwife Ira revealed that she could not provide education to the community because the community was not at home. They work in the fields for their daily food needs: “ Well, the target does not like anything like that, ma’am. For example, I want to provide additional food like that, giving eggs or holding a gathering for education and providing additional food like that. The target is in the huma [rice field], in the fields, so there are only 1 or 2. So, it is less effective here because of the people. They do not stay home every day, so they are also busy with their farming activities. Their livelihood is farming, right, ma’am? If they do not farm, they cannot eat because they earn money there; that is how it is ”. (Midwife) Midwife Ira revealed that, in general, people learn by imitating what their parents did before rather than finding out health information about whether an action is appropriate or not: “ And also looking at the elders’ previous [bad] habits, yes, they have been followed [uncritically by the community], so that is [a sign of] a lack of education, huh ”. (Midwife) Meanwhile, a professor of archaeology at the University of Indonesia who took part in community service activities for the Baduy community emphasized that access is one of the factors associated with the low health literacy of the Baduy community. The Inner Baduy community is difficult to access, so health-literacy programs cannot be implemented. 
At the same time, the Outer Baduy community is more easily accessible, so the community is also more literate about health: “ Well, what is interesting about our survey findings is that we divided three areas so there is Inner Baduy, and this is the area where access is difficult, so if we look for it, it is also problematic; there is also little knowledge about health and stunting. That is the characteristic of what is Inner Baduy. However, for Outer Baduy, communication is accessible because their knowledge regarding various kinds of programs has been widely accepted. On the other hand, because it is easy to access, many government, private, or community programs are carried out in the village. This village, so yes, is in the villages of Kaduketuk and then Cijahe and the surrounding areas, and the people are already used to receiving programs ”. (Archaeologist) Another external factor is the lack of consistency from external parties in increasing the literacy of the Baduy community. The programs provided are unsustainable, so they are less effective in increasing community literacy. The head of one neighbourhood ( punggawa ) said: “ Someone once told about stunting, but there was no further follow-up ”. (Punggawa) Finally, another notable cultural factor is gender segregation. When the researcher asked whether the health information provided had been understood, Baduy 1 responded in a way that indicated a lack of understanding and suggested that men might understand it better: “ People usually come here from the health centre or even the health department. They are people who are far away. Did they give health information? ” (The Author) “ No, maybe men understand ”. (Baduy 1) Baduy 1 insisted that health information, especially about stunting, should be conveyed to men for clarity: “ If you talk like this [health information about stunting], if you want it to be clear, you have to convey it to a man ”. (Baduy 1) When asked why health information should be communicated to men, Baduy 1 expressed uncertainty about the exact reason, attributing the preference to traditional gender roles and practices and mentioning that men typically handle such matters and that community meetings are attended exclusively by men: “ Don’t know. That’s a man’s business. They held a meeting together. You should talk about this with a man. If there is a community meeting here, all the people who come are men ”. (Baduy 1) This dialogue reflects cultural norms and gender roles within the Baduy community, where men are seen as the primary recipients and conveyors of important information. It highlights a potential barrier to effective health communication, as women might not be seen as appropriate recipients of health-related information despite their critical role in child rearing and health maintenance.
The most pronounced impact of the low health literacy of the Baduy community is fatalism. The community views a health condition as destined, something that must be accepted rather than corrected. For example, when asked why a mother’s child is stunted, the reason given is genetic rather than nutritional: “ His father’s [body] was small, so Sardin’s [body] was small ”. (Baduy 1) The fatalism of the Baduy people does not appear to be ideological, at least in terms of health, as revealed in the case of snake bites. The Baduy community understands that snakes are their greatest threat. Despite knowing the danger, they cannot avoid it because their work involves being in snake-prone areas. They feel resigned to their situation due to the lack of alternatives, so there is a philosophical acceptance of death as something predestined. However, the introduction of medical intervention brings hope for improvement. “ So far, they have completely ignored the data regarding deaths due to snake bites, thinking that they have just given up. This belief [the folk legend that Baduy is immune to snake venom] is just a tactic from kokolot [community elders]. [The actual belief is that] when someone’s life is ended [by snake bites], it is predestined. There’s something [belief] like that. But when medical people come in and all that [health facilities], [they start to believe that] there are things that can be fixed ”. (NGO) Another impact is the high maternal and child mortality rate. This fact was revealed by a resource person from the University of Indonesia Hospital in a community service talk show: “ Well, yesterday, we looked at the data from the last few years. Yes, in the last two years and three years, Baduy had a high maternal mortality rate. Yes, up to 4 people per year. Well, this is quite extraordinary. Just one is quite high. Here, it is up to four every year. The child death figure is even higher. There are cases of child deaths, especially neonates. They died when they were born. Well, this case is also quite high. Between 9 and 14 children per year die in Baduy ”. (Nurse) Lack of health literacy also puts the Baduy community in a vicious circle. Low health literacy makes them less able to maintain their health and protect themselves from disease. This condition leads to health costs that they cannot afford. Ultimately, because of this inability, they become even more fatalistic, as shown in an interview with a Baduy neighbourhood leader: “ Sometimes there are things like this, ma’am. For example, the village used to be a tiny population. Even then, they said no matter how badly my child is sick, do not take him to the medical centre. Could you not take him to Rangkasbitung? It is expensive, so where do we get the money to pay the bill? That is what the chatter of old people is like ”. (Punggawa)
4.3.1. Developing the Health Literacy of Community Leaders The Baduy community still depends on leaders like Jaro and other traditional authorities, so if community leaders have sufficient health literacy, they can pass it on to other community members. So far, community leaders have encouraged public health, but their efforts have remained at the level of health actions and have not yet reached the health-literacy stage: “ The point is that Jaro and Puun also urge the Baduy people to consume nutritious food, but not force it because there is a clash of customs [such as eating healthy but prohibited food] that cannot be forced ”. (Midwife) The potential for Jaro to become a community literacy agent in the health sector is enormous because the community attends monthly meetings to discuss all matters related to customary law. In these events, recommendations for health literacy can be included: “ We were invited to a community gathering every month. The purpose of the meeting is to provide instructions to the village or the parents; everything is talked about and entrusted to the parents. Only recently has this happened. Violations of various kinds are reported, and then there are instructions, such as if you [want to] eat [certain] food [from outside], you must be careful [that the food contains prohibited or unhealthy items]. It was recently held with the community. Every month, the meeting is held ”. (Baduy 1) 4.3.2. Managing Information-Technology-Based Health-Information Groups Even though the Baduy people live traditionally, they still have cell phones and use them mainly for business purposes . One of the central aspects of health literacy is the ability to search for and sort information using information technology. Baduy people are not used to looking for health information using their cell phones. Moreover, according to custom, information technology is limited to the Outer Baduy area: “ Technology can come in slowly. Cell phones are already in, but if they want to use them, they must go to Outer Baduy, ma’am. In Inner Baduy, they must go outside the village first. Then they are now free to meet their friends ”. (Dentist) So far, many Baduy already have the cellphone numbers of health workers. “ I have all [the midwives’ cellphone numbers]. I have the [cellphone] number of the health personnel and midwives in Cibaleger, in Kariki, I have all the [cellphone] numbers ”. (Baduy 1) However, health workers still tend to be passive. With this existing access to health workers, a WhatsApp group connecting the people of one village with specific healthcare workers could be formed, through which the health workers provide regular and contextual information to increase public health literacy. 4.3.3. Always Having at Least One Health Worker Present among Residents to Provide an Example of Healthy Living The visible presence of a health worker among community members emphasizes the importance of health and creates a sense of security for the community. The presence of health workers in the community is something that the Baduy people themselves want: “ Indeed, in the past, I heard that people from Baduy Dalam, for example, were not allowed to use medical services because it has been like that for generations. Nevertheless, now we live in a wider society. So, people from Outer Baduy have already used modern medicine, right? So sometimes what else can we do if we do not get help ”. (Punggawa) Community health workers are also vital to help the community overcome health problems. 
The following account from the SRI chairman shows the importance of the presence of health workers: “ My intention here is nothing different. I do not sell medicine. I do not sell these or those. I want to help the Baduy people because they do not know where to go. Mr. Jaro said, “Do not leave the Pustu [Subsidiary Health Center]”. No, I will not leave it. If the condition is like this, then that is the condition. So, in the end, that is it. Yesterday, Mr. Jaro’s wife had her feet scalded with hot water. I have taken it to the doctor, there are burns. Then, given ointment, she is healthy now. It was Mr. Jaro who asked for treatment directly ”. (NGO) 4.3.4. Encouraging Collective Reflection When Health Cases Occur In extreme situations, it is essential to reflect so that the same incident does not happen again. Incidents in the form of deaths due to health problems can be raised in communications, as long as this remains sensitive to customary law and ethics, as learning material for the community to be more careful and maintain their health. For example, the incidence of breast cancer can be used as a lesson to avoid foods that are not nutritious and contain carcinogenic substances: “ That is why I say when I go around, “Do not eat cilok [chewy tapioca balls], do not eat noodles, better eat boiled bananas”. This local wisdom has now been lost. The children eat cilok every morning; there are cilok around. Snacks. That afternoon, a lady on a motorbike was picked up by her husband and shouted “cilok…cilok”. The children do not want to eat dinner anymore; why? Because the taste is different, there is already flavouring, and all kinds of things, and a generation of flavouring-addicted people has emerged ”. (NGO) The research team from the University of Indonesia also revealed an incident of delayed assistance, which resulted in the death of a mother. “ For example, there was a risky pregnancy, but it turns out that because of a customary problem, she was not allowed to access health services. Finally, after negotiating with the traditional authorities, she was allowed. However, 15 min before arriving at the hospital, the mother died. The mother died on the road. So she died in the ambulance ”. (Nurse) A wise and sensitive approach to customary law is needed well in advance to increase public health literacy so that the same incident does not happen again. 4.3.5. Balancing Gender Communication Baduy culture assigns men the governmental, administrative, and technical spheres, while women retain the traditional role of managing the household and family. Because health literacy is needed regardless of gender, both genders must receive an education that is balanced yet adapted to their roles. This gender difference is also observed in various locations worldwide, such as Ghana and Hong Kong , where men generally have higher health literacy than women. An efficient gender-specific approach needs to be developed to improve the health literacy of the Baduy community . For example, health literacy related to nutrition can be directed at women because they culturally have an essential role in agriculture . In contrast, procedural and heuristic literacy can be directed at men.
The research results reveal that several factors contribute to the low health literacy of the Baduy community. These factors include low general literacy (reading and writing), customs that impose prohibitions on eating certain foods, extensive time spent in the fields, learning through imitation of parental behaviour, and difficulties in physical access. Additionally, inconsistencies in externally run health-campaign programs and gender segregation further exacerbate the issue. These findings align with the existing literature that highlights the challenges disadvantaged community groups face in improving health literacy . For example, Oredola et al. identify that Indigenous communities worldwide often transmit knowledge through deliberate instruction from parents to children, with children learning by imitating their parents’ behaviour, making parents key to health promotion among their children. In Indigenous Australian adults, geographical remoteness has been identified as a significant barrier to health literacy . However, this research presents some differences compared to studies conducted on Aboriginal communities in Australia. For instance, Nash and Arora reviewed studies that did not mention eating taboos as a constraint. Consequently, many interventions aimed at Aboriginal communities focus on nutritional aspects, such as lowering food prices and increasing food availability. In contrast, the situation is more complex for the Baduy community, which adheres to several food taboos. Unlike other Indigenous communities, which typically have food taboos only during specific occasions such as pregnancy and postpartum , the Baduy community restricts its food choices at all times and under all conditions. This practice reflects their philosophy of living a simple life, which is consistent with their low socio-economic status and benefits them economically. The existing obstacles are deeply connected to the low socio-economic status of the Baduy community. Extreme poverty, in addition to strict cultural practices, plays a role in hindering the development of health literacy. For instance, the taboos on eating certain foods and the extensive time spent working in the fields reflect the community’s low income levels, compelling them to forgo high-protein foodstuffs and focus on increasing productivity through intense labour. Interviews with community members also revealed several impacts of low health literacy, such as fatalism, maternal and child deaths, and issues related to health costs. These problems could be mitigated if the Baduy people had higher health literacy. High health literacy empowers individuals to seek health assistance, anticipate factors that increase the risk of maternal and child mortality, and take preventive steps to manage health costs effectively. Several strategies for increasing health literacy have been proposed to address these challenges. These include developing the health literacy of community leaders, managing information-technology-based health-information groups, ensuring the presence of at least one health worker among residents, encouraging joint reflection when health cases occur, and balancing gender communication. These strategies align with the previous literature on improving health literacy, which emphasizes context-sensitive interventions that address significant socio-economic and emotional challenges . 
The proposed interventions focus on developing the health literacy of community leaders, managing health information through technology, ensuring consistent health worker presence, fostering community reflection on health issues, and promoting balanced gender communication. Limitations We checked the data and analysis results again to maintain confirmability. The transferability of this research to other Indigenous populations undergoing similar cultural shocks might be possible, given the similarity in contextual factors such as history and geography. The credibility of this research was maintained because the participants also read the transcripts. However, the limited number of interviews and participants is a constraint of this study. Although this is common in case-study research, the small sample size prevents broader generalizations. Future studies should aim to include a larger number of participants to enhance the robustness of their findings. It is also important to note that the first author, who conducted the interviews, was not fluent in Baduy Sundanese, the language spoken by the Baduy community. Consequently, a midwife from the Dompet Dhuafa NGO, who participated in this research, was chosen as the translator. This decision may have introduced bias into the data-collection process. The researchers’ role in this study was participative, in the sense that we took part in helping the community improve its health. We joined the village midwife on her tours of the community and assisted in the health interventions she carried out for community members during the research. This involvement is a strength of the research, in the sense that we could experience first-hand the problems that the community faced. However, it also introduces bias, especially regarding theoretical saturation. We reached saturation after mothers from the community kept providing the same information, especially regarding male–female health literacy. Had we interviewed more community leaders, the saturation point might have come later, and the data might have been richer. Future research could examine the gender dynamics within the Baduy community and how they impact health communication and literacy. Research should look into effective ways to address gender-specific health needs and communication barriers. Another avenue for future studies is the feasibility and impact of using information technology, such as mobile apps, WhatsApp groups, and YouTube channels, for health education. Such research should explore the community’s access to and engagement with these technologies. Another study might explore the effectiveness of mobile health clinics for providing health services and education to the Baduy community and compare various health-communication strategies to identify the most effective methods for reaching and educating the Baduy community.
For a long time, the Baduy people in Indonesia have lived by adhering to their customs and maintaining strict isolation from the outside world. However, as access to the outside world increases and technological advances develop, many health problems in Baduy society must be addressed immediately. We, as outsiders, recognise the influence and power of customary law in determining the lives of the Baduy people. We also realise that the Baduy people have the right to health and should have the broadest possible access to health services, coupled with increased health literacy, which empowers them to maximise available health facilities. Addressing this issue involves many stakeholders, and their alignment is complex and requires precise and intense communication. In this research, six sources discussed the factors, impacts, and ways to overcome low health literacy in the Baduy community. This research found that general illiteracy in the Baduy community, dietary restrictions, livelihoods, learning methods, and gender segregation play an essential role in the low health literacy of the Baduy community. Difficult access to villages and the lack of consistency of external parties in providing health programs also play a role. Mentally, the impact is fatalism in the society. Physically, low health literacy contributes to high maternal and child mortality rates and high health costs. The strategy proposed to increase the health literacy of the Baduy community based on the findings of this research is to encourage collective reflection when extreme health cases occur in the community. An emphasis on developing health literacy that targets community leaders is also crucial because the Baduy community generally respects community leaders. Furthermore, balancing gender communication is a solution that suits the existing situation of gender segregation. The fact that people have low mobility, because they are always in the village and are not allowed to use vehicles, demands the presence of at least one health worker among the residents who provides an example of healthy living through direct communication. Managing information-technology-based health-information groups is essential, and this research suggests establishing WhatsApp groups and YouTube channels that socialise and teach health literacy to the Baduy community. The findings indicate that several factors, including general illiteracy, cultural practices, and time constraints, drive low health literacy in the Baduy community. Consistent and long-term health programs are crucial for gradually overcoming these deep-rooted issues. Short-term interventions are unlikely to produce lasting change because they do not allow enough time to build trust, adapt to cultural nuances, or make significant improvements in literacy and health knowledge. Furthermore, the impacts of low health literacy, such as fatalism, high maternal and child mortality, and a vicious cycle of poor health maintenance, are severe and complex. These issues cannot be resolved with temporary solutions; they require ongoing education, support, and adaptation of health programs to the community’s evolving needs. Long-term partnerships can help build a resilient health infrastructure that continuously addresses these challenges. Hence, as a policy recommendation, we advise that external health programs be consistent and sustained. 
The government must avoid short-term interventions and instead focus on long-term partnerships with NGOs, government agencies, and other stakeholders to provide ongoing support and resources for the Baduy community. The government needs to use the partnership to implement regular health assessments to monitor the health status of the Baduy community and identify emerging health issues. The partnership should then use these data to continually adapt and improve health programs. In line with this strategy, the partnership should train and employ local Baduy community members as health workers. These individuals can liaise between the health system and the community, providing education and basic health services while respecting cultural norms. Meanwhile, the partnership can design health programs that address gender segregation by providing separate but equal health-education sessions for men and women, ensuring that both genders receive the same quality and quantity of health information.
Efficacy and Accuracy of Maxillary Arch Expansion with Clear Aligner Treatment
The term “clear aligner therapy (CAT)” refers to the orthodontic technique that uses clear aligners for the treatment of dental malocclusions [ , , ]. Since its development in 1997, Invisalign ® technology has been established worldwide as an aesthetic alternative to labial fixed appliances . Since its first appearance on the market, the Invisalign ® system has seen significant development over time; many of its features have been continuously improved. New and different attachment designs have been developed, and the manufacturing material has been tested and improved. To allow for additional treatment biomechanics, the combined use of clear aligner treatment with computer-guided piezocision and new auxiliaries, such as “precision cuts” and “Power Ridges”, has been proposed and used. According to the manufacturer, Invisalign ® is capable of effectively performing dental movements such as bicuspid derotation up to 50° and root movements of maxillary central incisors up to 4 mm. Despite the claimed efficacy of the treatment, there is still controversy among professionals about its real clinical performance. On the one hand, proponents are convinced and present cases of successful treatment, providing clinical evidence. On the other hand, opponents argue that there are significant limitations, especially in the treatment of cases with complex malocclusions [ , , , ]. Rossini et al., in their systematic literature review, found that clear aligner treatment aligns and levels the arches and is effective in controlling anterior intrusion but not anterior extrusion; it is effective in controlling posterior buccolingual inclination but not anterior buccolingual inclination; and it is effective in controlling upper molar bodily movements of about 1.5 mm but is not effective in controlling the rotation of rounded teeth in particular . Aligners are now commonly used, as in fixed appliance therapy, for the treatment of malocclusions of all types and severities, particularly for transverse dento-alveolar problems requiring the expansion of one or both arches . In the evaluation of occlusion in the transverse plane, occlusion is considered correct when the palatal cusps of the maxillary posterior teeth occlude with the central fossae of the mandibular posterior teeth . If the upper buccal cusp occludes with the central fossa of the posterior lower teeth, a malocclusion called a crossbite occurs . This type of malocclusion may be of skeletal origin, whereby the dento-alveolar processes are correctly positioned in relation to the bony base, but the base presents maxillary skeletal hypoplasia or mandibular skeletal hyperplasia (or both) . When the malocclusion is skeletal, its early correction is recommended through maxillary expansion with an orthopedic appliance, which guarantees greater stability over time . When the malocclusion is of dental origin, the bone base has a correct transverse proportion, but the dento-alveolar processes are altered [ , , ]. It has been observed that one in three patients presents with a posterior crossbite of at least one tooth . Arch expansion can be used to resolve crowding, correct dento-alveolar crossbite, or modify the arch shape . Single-tooth crossbite is an easy case to treat with clear aligners; the aligners function as bite-planes that eliminate occlusal interferences and help to correct the crossbite. 
The crossbite of multiple teeth can be more complicated . The aligners expand mainly by changing the torque of the posterior teeth through buccal crown movement. The expansion can be performed at the canine, molar, and premolar level, or differentiated while maintaining a stable sector . Several authors have observed that treatment with the Invisalign ® system achieves a significant increase in the transverse width of the arch as well as in the arch perimeter [ , , ]. Current knowledge on invisible aligners allows us to have a much clearer idea of the basic characteristics of aligner systems, but there remains a need to study systems other than Invisalign ® to provide greater evidence for the different aligners that are widespread on the market . The predictability of posterior expansion through treatment with aligners has been compared to the efficacy of the multibracket technique, and treatment with self-ligating multibrackets has been shown to be effective in solving mild crowding by increasing the width of the arch and correcting buccolingual tilt, occlusal contacts, and root angulations. While Invisalign ® treatment aligns the arches by derotating the teeth and leveling the arches, its weaker control of tooth movement means that Invisalign ® can easily tip crowns and be less effective in correcting transverse problems . There is precedent in the literature for the effectiveness of Invisalign ® clear aligners (Align Technology, Santa Clara, CA, USA) and the predictability of its software (Align Technology, Santa Clara, CA, USA) for the planning of treatment with arch expansion. Some authors have evaluated how effective clear aligners are in achieving the proposed treatment objectives ; others have compared the results of treatment with clear aligners with those obtained with fixed appliance therapies. Most of these investigations were carried out with the previous EX30 system, which was recently replaced by SmartTrack (Align Technology, Santa Clara, CA, USA), so it is necessary to evaluate the characteristics of the updated system. Posterior expansion of up to 2 mm per quadrant is a predictable movement achievable with aligners, and predictability decreases with increasing planned expansion . In cases of crossbite, it is advised to overcorrect the expansion in the ClinCheck ® programming until the palatal cusps of the upper molars contact the buccal cusps of the mandibular molars . Beyond 2 mm of expansion, cross elastics or other auxiliaries may be necessary to achieve the planned result . The predictability of maxillary expansion with clear aligners has shown wide variability over time. Several studies that have evaluated the expansion of dental arches suggest that, to minimize the risk of relapse and gingival recession, the expansion of the arch width should be limited to a maximum of 2–3 mm per quadrant. Invisalign ® may be indicated to achieve expansion in cases with crowding of 1 to 5 mm and in cases that require expansion to create space for blocked-out teeth. Arch expansion with Invisalign ® can result in an aesthetic advantage for the patient because widening the dental arches improves the aesthetics of the smile by reducing the buccal corridors [ , , , ]. 
Considering this variability in the results obtained from studies in the literature concerning the predictability of maxillary expansion with clear aligners, the aim of this study is to evaluate the efficacy and the accuracy of maxillary arch transverse expansion using the Invisalign ® clear aligner system without auxiliaries other than Invisalign ® attachments.
This prospective study was approved by the Ethical Committee of Sapienza University of Rome n° 1621/15 r. 3364, and the patients and/or their parents signed the informed consent for participation in the study. The patients were selected from a group of 140 subjects recruited in the UOC of Orthodontics of the Department of Odontostomatological and Maxillo-Facial Science of “Sapienza” University of Rome. A total of twenty-eight patients were included in the study. The patients were selected according to the following inclusion criteria: patients of both sexes, aged between 13 and 25 years old with complete permanent dentition; treatments performed with Invisalign ® aligners made from SmartTrack ® material; treatments that required transverse dento-alveolar expansion (2–4 mm) to correct the malocclusion; patients with sufficient clinical crown height (greater than 4 mm); and patients who followed the treatment with good compliance. The exclusion criteria considered in the study were as follows: patients affected by systemic diseases and orofacial syndromes; patients with missing teeth in the posterior sectors; need for extractive therapy; presence of agenesis (excluding the third molar); excessive dental erosion at the cusp level, such that the tips of the dental cusps could not be identified; multiple and/or advanced caries; patients with conoid teeth; patients with periodontal diseases; need for auxiliaries to correct transversal problems (TADs, REP, criss-cross elastics); patients with implants, prosthodontic rehabilitation, or ankylosed teeth; and patients requiring orthognathic surgery. All the patients were treated with the Invisalign ® technique by a single Invisalign provider. The treatment protocol for all the selected patients included the application of the Invisalign ® clear aligner system without auxiliaries other than the Invisalign ® attachments. In no case was tooth extraction or interproximal enamel reduction (IPR) performed. Upper arch expansion was planned to correct crowding and transverse discrepancy. The patients were instructed on how to use the aligners: they were to wear them all day, except during meals and dental hygiene, and all night; aligners were changed every 7 days. The fit of the aligner and the presence of all attachments were checked by the provider every four stages. It was explained to all the patients that they were part of a research protocol, and they or their parents accepted their participation by signing the informed consent; the patient’s collaboration was recorded in the clinical record. For each patient, an intraoral scan of the pretreatment dental arches (T0) and a scan at the end of treatment (T1) were performed with the Itero Flex ® scanner. The final position of the corresponding ClinCheck ® representation (TC) was also collected to establish the accuracy of the final virtual model with respect to the movements observed in the post-treatment model. Three models were then collected for each patient according to the following timetable: Pretreatment STL model (T0) obtained by scanning the maxillary arch before starting Invisalign ® treatment. Post-treatment STL model (T1) obtained by scanning the maxillary arch at the end of the treatment with Invisalign ® . STL model from the final model programmed in the ClinCheck ® software (TC). All models of the maxillary arches were opened with the ExoCad ® program (DentalCad). Using the program’s own measuring tool, linear millimeter measurements were taken. 
All measurements were performed by a trained single operator. The following transverse linear measurements were carried out on the upper arch for each T0 and T1 model and for the ClinCheck ® model (TC): Intercanine cusp width: linear distance in millimeters from the cusp of the maxillary canine of one hemiarch to the cusp of the maxillary canine of the contralateral hemiarch (A). Intercanine gingival width: linear distance in millimeters from the most apical point of the palatal surface of the crown of the maxillary canine of one hemiarch to the same point of the contralateral hemiarch (B). First inter-premolar width: linear distance in millimeters from the buccal cusp of the first premolar of one hemiarch to the buccal cusp of the contralateral first premolar (C). Second inter-premolar width: linear distance in millimeters from the buccal cusp of the second premolar of one hemiarch to the buccal cusp of the contralateral second premolar (D). First molar mesio-vestibular cusp width: linear distance in millimeters from the mesiobuccal cusp of the first molar of one hemiarch to the mesiobuccal cusp of the contralateral first molar (E). First molar gingival width: linear distance in millimeters from the most apical point of the palatal surface of the crown of the first molar of one hemiarch to the same point of the contralateral hemiarch (F). In addition, the following measurements were performed: Expansion obtained was calculated as the difference between the post-treatment width and the pretreatment width (T1-T0). Planned expansion was calculated as the difference between the width planned in the ClinCheck ® and the pretreatment width (TC-T0). Accuracy of expansion was calculated as the difference between the expansion planned in the ClinCheck ® and the expansion obtained (TC-T1). Clinical accuracy (%) was calculated for all measurements using the equation [(expansion obtained/planned expansion) × 100]. To estimate the size of the sample population for this study, a preliminary investigation was carried out to determine the power of the study (PS) and to establish the effect size (ES) (0.58) of the sampled population for the experimental study. Twenty-six patients were needed to estimate the expansion movement with a 95% confidence interval (CI), a power of 80%, and a level of significance of 5% for detecting an effect size of 0.58. Intra-examiner reliability was evaluated; the same examiner performed the measurements on 10 patients and repeated them two weeks later. The reliability of all measurements was assessed using an intraclass correlation coefficient (ICC). Numerical variables were expressed as mean and standard deviation values. Descriptive statistical analysis was performed for all measurements separately to compare the T0-T1 changes and the T0-TC differences. The normality of the measurements was assessed using the Shapiro–Wilk test. To compare the means between groups, a Student’s t -test for independent data was performed once normality was validated. If normality was not met, the nonparametric Mann–Whitney U test was applied. The significance level applied in the analysis was 5% (α = 0.05). SPSS software (IBM Corp, Chicago, IL, USA) version 26 was used to analyze the data.
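To make the derived measures concrete, the sketch below computes the expansion obtained, the planned expansion, their difference, and the clinical accuracy percentage from the three sets of width measurements, following the equations stated above, and then mirrors the described test selection (Shapiro–Wilk for normality, followed by a t-test for independent data or the Mann–Whitney U test). This is a minimal illustration only: the study's analysis was run in SPSS, and the array values and variable names here are invented placeholders, not study data.

```python
import numpy as np
from scipy import stats

# Hypothetical first inter-premolar widths (mm) for four patients;
# T0 = pretreatment, T1 = post-treatment, TC = ClinCheck (planned) model.
t0 = np.array([40.1, 38.7, 41.3, 39.5])
t1 = np.array([42.6, 41.0, 43.9, 41.8])
tc = np.array([43.0, 41.5, 44.2, 42.4])

obtained = t1 - t0                            # expansion obtained (T1-T0)
planned = tc - t0                             # planned expansion (TC-T0)
shortfall = tc - t1                           # accuracy of expansion (TC-T1)
clinical_accuracy = obtained / planned * 100  # (obtained/planned) x 100

print(f"mean expansion obtained: {obtained.mean():.2f} mm")
print(f"mean planned expansion:  {planned.mean():.2f} mm")
print(f"mean shortfall (TC-T1):  {shortfall.mean():.2f} mm")
print(f"mean clinical accuracy:  {clinical_accuracy.mean():.2f} %")

# Test selection as described in the text: Shapiro-Wilk for normality,
# then Student's t-test for independent data or the Mann-Whitney U test.
_, p_obtained = stats.shapiro(obtained)
_, p_planned = stats.shapiro(planned)
if p_obtained > 0.05 and p_planned > 0.05:
    _, p = stats.ttest_ind(obtained, planned)
else:
    _, p = stats.mannwhitneyu(obtained, planned)
print(f"obtained vs planned expansion: p = {p:.3f} (alpha = 0.05)")
```

With the real per-patient measurements for each of the six widths (A–F), the same few lines would reproduce every derived quantity reported in the Results section.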
The results showed a high degree of intra-observer reliability, with an intraclass correlation coefficient > 0.80 for all linear measurements. Twenty-eight patients (15 males, 18 females), with a mean age of 17 ± 3.2 years old, were evaluated. The descriptive statistics of all the measurements performed pretreatment (T0), post-treatment (T1), and in the ClinCheck ® model (TC) are reported, together with the planned expansion (TC-T0), the expansion obtained (T1-T0), the difference between the expansion obtained and the planned expansion, and the clinical accuracy. The planned expansion (mm) increased progressively from anterior to posterior at the level of the cusps, i.e., the planned intercanine width was on average smaller than the planned width at the first premolar, and the planned width at the first premolar was on average smaller than the planned width at the first molar. Furthermore, the planned expansions in millimeters for the intercanine and intermolar gingival widths were smaller than those for the cusp widths. On average, an expansion of between 5% and 7% more than the initial width (between 1.6 mm and 3.5 mm) was planned. The maximum expansion was planned at the level of the first inter-premolar width (7.35%, 2.95 mm) and the minimum at the intercanine cusp width (4.86%, 1.6 mm). On average, an expansion of between 3% and 7% more than the initial width was obtained. The maximum expansion was obtained at the first inter-premolar width level (6.87%, 2.7 mm) and the minimum at the first intermolar gingival width level (2.92%, 0.98 mm). The percentage of expansion obtained was less than the percentage of expansion planned for all measures. The T1-TC difference was less than 1 mm for all measurements except the intermolar buccal cusp width, which reached it. The greatest differences between T1 and TC occurred at the level of the intermolar buccal cusp width (1.05 mm) and at the level of the gingival widths (intercanine gingival width 0.98 mm and first intermolar gingival width 0.78 mm). However, for the intercanine, inter-premolar, and intermolar measurements at the level of the cusps, the differences between the expansion obtained and the planned expansion were not statistically significant, while they were statistically significant for the gingival measurements (intercanine gingival width, intermolar gingival width). This result suggests that there is more vestibular tipping movement than bodily movement of the crowns at the level of the canines and first molars. The global clinical accuracy of the expansion treatment was 70.88%. The accuracy of the gingival measurements was low, around 50%, while for the measurements of the cusp widths, the accuracy was between 70% and 82%. Among the intercusp measurements, the expansion was most accurate for the first premolar (93.53%) and least accurate for the first molar (70.55%).
This study evaluated the possibility of effective transversal expansion of the upper arch through Invisalign ® treatment without the use of auxiliaries other than Invisalign ® attachments, and the differences among the various levels measured. In addition, the accuracy of the virtual pretreatment model developed with ClinCheck ® was evaluated in relation to the results obtained from the transversal expansion of the maxillary arch. Monitoring tooth movement in orthodontics is important to assess the ability of devices to achieve movement and to establish protocols capable of achieving orthodontic treatment goals [ , , , ]. New technologies facilitate the evaluation of dental movement and allow for more precise measurements [ , , , ]. In this way, it was possible to evaluate the expansion achievable with Invisalign ® . The results show that it is possible to expand to a higher percentage at the intercuspid level of the molar area and to a lesser extent at the canine intercuspid level. These results are in line with Morales-Burruezo et al. , who analyzed transverse expansion using Invisalign SmartTrack and concluded that expansion is achievable when it is alveolar, with higher efficiency at the premolar level and lower efficiency at the canine level. However, Clemens et al. , who evaluated 51 patients treated with aligners using the Peer Assessment Rating index (PAR index), observed that of the 25 patients who required transverse augmentation, 79% achieved it, while 17% remained stable and 4% worsened. To assess the accuracy of expansion, an effectiveness index was considered, i.e., how close the expansion obtained was to that predicted by the ClinCheck ® . Effectiveness was considered to be 100% if the expansion obtained was statistically equal to that predicted. The results of this study showed an average accuracy of 70%. The differences in accuracy between the different measures (intercanine cusp and gingival width, first inter-premolar width, and first intermolar cusp and gingival width) were not statistically significant; therefore, the overall accuracy of the expansion treatment was 70%, regardless of tooth type. The present study showed that the effectiveness is lower when measured at the palatal side of the tooth, in agreement with Houle et al. , who claimed that what occurs is not bodily movement but rather a coronal inclination of the tooth. Furthermore, they state that the accuracy of digital programming with aligners is 72.8% in the maxillary arch, in accordance with our results. In our study, the effectiveness was on average 55% at the intermolar gingival level, while at the canine gingival level it was 43%; these results suggest, as reported in other studies [ , , ], that there is less movement of the root portion of the tooth compared to the cusp portion, at least at the canine and molar levels. It would therefore appear that, although bodily movement is programmed in the ClinCheck ® , what is obtained is mainly a coronal tipping movement of the tooth. Kravitz et al. analyzed the predictability of Invisalign treatment with the G3 material by superimposing initial and final models and showed that transverse expansion is not very accurate, with a predictability of 40.7%. The authors state that any type of movement has a predictability of 41%. However, it should be noted that these authors analyzed the effectiveness of expansion with aligners made of the G3 material, while the present study analyzed the results obtained with the new SmartTrack ® material. 
This difference could explain the better performance of the new material through which the expansive force is applied. Similar studies were performed by Lione et al. on the analysis of dental expansion movements in digital dental models. In agreement with the present study, they obtained a greater expansion at the level of the upper first molars with respect to the other teeth. In their study, linear and angular measurements were performed before treatment (T0), at the end of treatment (T1), and on the final virtual models (ClinCheck ® models), and significant differences were obtained for both linear and angular measurements for the maxillary canines, resulting in little predictability . In another study, Lione et al. evaluated maxillary expansion with the Invisalign First System ® in growing subjects. Twenty-three patients with a mean age of 9.4 ± 1.2 years old, with a maxillary posterior transverse interarch discrepancy, were included in the study. The discrepancy was obtained by calculating the difference between the maxillary intermolar width, measured between the central fossae of the maxillary first molars on each side, and the mandibular intermolar width, measured between the mesiobuccal cusps of the mandibular first molars on each side. Patients were treated without extraction with Invisalign First System ® clear aligners, with no auxiliaries other than Invisalign ® attachments, and no interproximal enamel reduction (IPR) was planned during treatment, as in our protocol. The results of their study showed a significantly greater increase in width at the first primary molars compared to the second primary molars and primary canines. The maxillary first molars also showed the greatest expansion in mesial intermolar width due to the rotation that occurred during expansion around the palatal root, which acts as a hinge. These results are consistent with ours in that the greatest expansion was obtained in the most posterior sectors and at the occlusal level; however, in our study we did not consider both cusps of the molar, so it was not possible to assess whether rotation was present. This study has some limitations; for example, the amount of crowding, which could influence the effectiveness of the expansion treatment, was not considered, and the patients were not classified according to the amount of expansion needed relative to their crowding. For future research, it would be advisable to increase the sample size, consider different groups of malocclusions, and include a control group with another type of appliance used for dento-alveolar expansion. In addition, other measures could be included to evaluate the vestibular inclination and rotation of the teeth as treatment effects, to confirm the promising results of the present study.
Experience has shown that certain movements cannot be achieved with aligners, but the actual limitations remain unclear. Previsualization of the result can often be misleading for clinicians and patients. In conclusion, the efficacy of maxillary arch transverse expansion is, on average, 70%, and it is not related to the type of tooth considered but applies overall. Effectiveness is lower at the lingual level, with an average of 55% at the intermolar level and 46% at the canine level. Statistically significant differences were found between the efficacy at the cuspal level and the efficacy measured at the most apical point of the palatal surface of the tooth, indicating that there is more tipping movement than bodily movement. The ClinCheck ® programs a bodily movement, whereas what we obtained was a tipping movement.
Tracking fungal species-level responses in soil environments exposed to long-term warming and associated drying
Climate change is affecting soil microbial communities, and thus, the cycling of carbon (Melillo et al. , Allison and Treseder , Cavicchioli et al. , IPCC ). Because soil microbes mediate biogeochemical cycles, their responses to global change drivers, such as warming and warming-induced drying (referred to as warmed and warming from here onwards), have resulted in ecosystem-scale impacts on the carbon cycle, including changes in decomposition and CO 2 emissions (Allison and Treseder , Melillo et al. ; Romero-Olivares et al. , ). These impacts are partially caused by community-level changes, where pathogenic and other weak-decomposer fungi (i.e. fungi with a limited suite of enzymes to break down organic matter) increase in abundance under warming, investing more resources in cell metabolic maintenance rather than in decay (e.g. Treseder et al. , Solly et al. , Morrison et al. , Romero-Olivares et al. ). However, we know very little about how fungi at the species level are responding and adapting to global change drivers. We know even less about what these responses and potential physiological and molecular adaptation pathways look like in natural soil microbial communities. Fungal species-level responses to global change drivers have been documented in laboratory settings. For example, Neurospora discreta was experimentally evolved for 1500 generations under elevated temperature conditions, resulting in greater resource investment in respiration and spore production at the expense of biomass production (Romero-Olivares et al. ). Other shorter-time-scale studies found similar results; catabolic processes, such as growth and respiration, are impacted by elevated temperature (Malcolm et al. , Crowther and Bradford ). Acclimation studies in Neurospora crassa revealed that when exposed to heat shock (i.e. a temperature shift from 15°C to 42°C), N. crassa invested in the production of molecules for cell homeostasis, such as heat shock proteins, while arresting the production of cell morphogenesis proteins, such as actin and tubulin (Mohsenzadeh et al. ). Whether or how microbial species respond and/or adapt to warming in natural soil environments remains largely unknown (DeAngelis et al. ). Tracking species-level responses to warming in natural environments can offer insight into potential adaptation pathways. Here, we tracked species-level responses in a natural soil environment by mapping community-level soil metatranscriptomes against the genomes of two wild fungal species isolated from control conditions and warmed treatment soils in a long-term field warming experiment. We chose Mortierella spp. and Penicillium swiecickii (referred to as Mortierella and Penicillium hereinafter) for two main reasons. First, they were previously found to be the most abundant and presumably active species, based on transcript counts, in control conditions and warmed treatment soils alike (Romero-Olivares et al. ). Second, these fungi are free-living and easy to isolate and grow in culture compared to, for example, ectomycorrhizal fungi, which require a host. This meant that we were able to consistently isolate them from soil samples from both control conditions and warmed treatment soils. Since the fungal community shifts in composition in response to global change drivers (e.g. Treseder et al. , Morrison et al. 
), species that are highly abundant under control conditions may decrease in abundance or even disappear under treatment conditions; isolating the same fungal species from different soil samples is therefore very challenging. Our objective was to investigate how individual fungal species respond to global change drivers in a natural soil environment, to advance our understanding of fungal responses to climate change and to gain insight into potential adaptation pathways. Specifically, we investigated potential physiological changes at the species level in a natural soil environment exposed to global change drivers, which provides a more realistic overview of fungal responses to global climate change than studies done under controlled laboratory settings. We addressed our objective by asking the following questions: (i) What changes do Mortierella and Penicillium experience, at the transcription level, when exposed to warming in a natural soil environment? (ii) What functional pathways and genes are affected in each species in response to warming? (iii) Are there any impacts to gene regulation in response to warming? and (iv) How are Mortierella and Penicillium strategizing resource investment under warming?
Our field warming experiment was located in a mature black spruce (Picea mariana) forest in Delta Junction, Alaska, United States (63°55’N, 145°44’W). The experiment was established in the summer of 2005 (Allison and Treseder ). Briefly, greenhouses and neighboring control plots were established in pairs in a 1 km² area; control plots were left untouched, while greenhouses (i.e. warmed treatment) warmed the soil passively during the growing season (May-September) using closed-top chambers (n = 4). The top plastic panel was removed (September-May) to allow snowfall to reach the plots. The air inside the greenhouses was, on average, 1.6°C warmer than in the control plots. The soil temperature at a depth of 5 cm was 0.5°C higher inside the greenhouses than in the control plots. These increases in temperature are within the expected range for high-latitude ecosystems under global climate change (IPCC ). During the growing season, gutters and tubing re-directed precipitation into the greenhouses to minimize drying. However, the warming treatment resulted in higher evapotranspiration and reduced soil moisture by 22%, on average (i.e. warming-induced drying). In the summer of 2015, we collected four soil cores (332 cm³) from the top 10 cm at the center of each greenhouse and control plot (n = 4) and placed them inside sterile Whirl-Pak® plastic bags. Approximately one gram of soil was immediately soaked in 5 ml LifeGuard™ Soil Preservation Solution (Qiagen, catalog 12868) for RNA extraction, avoiding soil disturbance as much as possible to prevent transcription-level changes. The preserved soil solution and the soil samples were kept in a cooler with ice for 24 h and then transferred to a −80°C freezer and a 4°C refrigerator, respectively. Both were processed within a week of collection. The protocol for RNA extraction and metatranscriptome sequencing has been described in detail by Romero-Olivares and collaborators. Briefly, the Joint Genome Institute (JGI) used rRNA depletion protocols to prepare paired-end libraries, which were then fragmented and reverse transcribed. The fragmented cDNA was treated with end-repair, A-tailing, and adapter ligation, amplified with 10 or 15 cycles of PCR, and sequenced on a HiSeq 2500 system. Sequencing projects are deposited at the JGI with project ids: 1107–496, –499, –504, –507, –509, –514, –519, and –520. Simultaneously, we carried out various isolation methods for culturing fungi from the soil samples. Briefly, we prepared petri plates with malt extract agar (MEA) (20 g/L agar, 5 g/L malt extract, 5 g/L yeast extract) and potato dextrose agar (PDA) (39 g/L dehydrated potato dextrose agar, MP Biomedicals™) and proceeded to isolate fungi by two different methods. The first method was sprinkling 0.5 g of soil directly onto the MEA and PDA plates. The second was dilution-to-extinction, where 1 g of soil was diluted in 10 ml of autoclaved water under sterile conditions and then diluted serially 5 times (1:10, 1:100, 1:1000, 1:10 000, 1:100 000). From each dilution, we used 50 µl to inoculate MEA and PDA plates. This resulted in approximately 60 petri plates that we incubated under two different conditions: 30 petri plates were incubated at 22°C for 7 days, and 30 were incubated at 10°C for 3 days to discourage growth of fast growers and then moved to 22°C for 5 more days.
We randomly selected 8 colonies from each plate (480 total colonies), inoculated them onto PDA plates to obtain clean individual colonies, incubated them at 22°C for 7 days, and extracted DNA using the CTAB method. We amplified the ITS region using the ITS1-ITS4 primers (White et al. ) and sequenced the amplicons by Sanger sequencing. We obtained good-quality sequence data for 341 isolates and used BLAST (Sayers et al. ) to determine identity. We identified 10 isolates of Mortierella and 17 of Penicillium from different control and warmed plots. Once we determined we had the same species (i.e. ≥99% similarity in the ITS region), we chose four isolates for our study (two from each species; one from warmed treatment and one from control conditions) and deposited sequences in NCBI GenBank (Penicillium control, accession number: MW474735; Penicillium warmed, accession number: MW474736; Mortierella control, accession number: MW474738; Mortierella warmed, accession number: MW474737). We sent high-quality DNA of these four colonies to the JGI to sequence their whole genomes. These sequencing projects are deposited at the JGI with project ids: 1144–747, -771, -787, -789. Metatranscriptomes and whole genomes were quality trimmed by removing adapters with Trimmomatic (v 0.39) using Illumina TruSeq3-PE adapters with sliding window 4:15 and dropping reads below 25 bases long (Bolger et al. ), and quality checked with FastQC (v 0.11.5) (Andrew ). We assembled genomes using SPAdes (v 3.13.1) (Bankevich et al. ), assessed assembly quality with QUAST (v 4.5) (Gurevich et al. ), and indexed them with STAR (v 2.7.5c) (Dobin et al. ). Metatranscriptomes were aligned and mapped to the whole genomes using STAR (v 2.7.5c) with ‘twopassMode Basic’ due to a lack of annotated reference genomes (Dobin et al. ). We used Cufflinks (v 2.2.1) with the default normalization and false discovery rate to estimate transcript abundance and test for differential expression (Trapnell et al. ). This pipeline resulted in multiple tables including transcript counts for control and warmed treatment samples, fold change data (i.e. the degree of change of transcript counts between control and warming in relation to the mean of normalized counts), and DNA sequences for each transcript. We manually blasted each transcript against the GenBank database to identify them (Sayers et al. ). We selected a consensus gene based on % identity (≥80%), alignment length (≥100 bp), and E-value (≤1e−50), with a few exceptions (i.e. E-values >1e−50) (Table S1). We categorized transcripts based on InterPro (Blum et al. ) as having functions related to catabolic processes, cell homeostasis, cell morphogenesis, DNA regulation and organization, or protein biosynthesis (Table ). A subset of genes in Mortierella and Penicillium could not be identified because BLAST resulted in ‘hypothetical protein’ or ‘uncharacterized protein’. This subset was therefore left out of the analysis (listed as “unknown” in Table S1). We identified ATP synthase, cytochrome c oxidase, heat shock proteins, histones, NADH dehydrogenase, ribosomal proteins, and translation elongation factor as genes of interest since they were the genes transcribed the most (i.e. more than 20 different transcripts each). We used the lme4 and lmerTest packages in R (Bates et al. , Kuznetsova et al. , R Core Team ) to carry out mixed models for each functional category, individually for specific genes of interest, and for gene expression.
For each functional category, warmed treatment and species were fixed factors, plot was a random factor, and transcript count was the response variable; we used post hoc t-tests to determine significant differences between species, warmed treatment, and functional category. For genes of interest, we ran individual models for each gene in each species; warmed treatment was the fixed factor, plot was the random factor, and transcript count was the response variable. For gene expression data of functional categories, species was the fixed factor, plot was the random factor, and expression fold change was the response variable. For gene expression of genes of interest, we ran individual t-tests comparing upregulated fold change expression between species, as well as downregulated fold change expression between species. In all cases, we used P ≤ 0.05 as the significance threshold. Our analyses were non-parametric because we ranked all data. The scripts for bioinformatics and statistical tests were deposited at https://github.com/adriluromero/warming_metagene . Computations were performed on Premise, a central, shared HPC cluster at the University of New Hampshire.
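As a rough illustration of the model structure described above, the following R sketch fits a rank-based mixed model with lme4/lmerTest. It is a minimal example, not the authors' deposited script (which is available at the GitHub link above), and the data frame and column names (df, counts, treatment, species, plot) are assumptions for illustration.

```r
# Minimal sketch of the rank-based mixed model described above.
# Assumed input: data frame `df` with columns `counts` (transcript counts),
# `treatment` (control/warmed), `species` (Mortierella/Penicillium),
# and `plot` (paired plot ID).
library(lme4)
library(lmerTest)  # adds Satterthwaite p-values for the fixed effects

df$rank_counts <- rank(df$counts)  # rank-transform for a non-parametric analysis

# Fixed effects: treatment, species, and their interaction;
# random intercept for plot.
m <- lmer(rank_counts ~ treatment * species + (1 | plot), data = df)
summary(m)
```

The treatment × species interaction term in this formula corresponds directly to the interspecific differences in warming response reported in the Results.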
To answer our first question, we found no significant difference between the overall transcript counts of Mortierella and Penicillium between control and warmed plots ( P = 0.21), corroborating that these fungi were equally active under control and warmed conditions (Fig. ). However, a breakdown of the transcript counts by functional category showed interspecific differences in the response of functional pathways and genes to warming. In all functional categories except protein biosynthesis, there was a significant interaction between species and warmed treatment, revealing that Mortierella and Penicillium responded differently to warming (catabolic processes, Fig. , P = 0.05; cell homeostasis, Fig. , P < 0.01; cell morphogenesis, Fig. , P < 0.01; DNA regulation and organization, Fig. , P < 0.01; protein biosynthesis, Fig. , P = 0.60). Specifically, under warming, Penicillium had, on average, significantly higher transcript counts for all functional categories except protein biosynthesis when compared to control conditions (catabolic processes, Fig. , P < 0.01; cell homeostasis, Fig. , P = 0.04; cell morphogenesis, Fig. , P < 0.01; DNA regulation and organization, Fig. , P = 0.05; protein biosynthesis, Fig. , P = 0.64). In contrast, under warming, Mortierella had, on average, significantly fewer transcript counts for cell homeostasis and DNA regulation and organization compared to control conditions (Fig. , P < 0.01 and Fig. , P = 0.01); catabolic processes, cell morphogenesis, and protein biosynthesis were not significantly different between control and warmed samples (Fig. , P = 0.39; Fig. , P = 0.07; Fig. , P = 0.24, respectively). Analyses of specific genes of interest showed that transcription of ATP synthase and cytochrome c oxidase was significantly higher under the warmed treatment compared to control conditions in Penicillium (Fig. , P < 0.01 and Fig. , P = 0.05, respectively), and that translation elongation factor was significantly lower under the warmed treatment compared to control conditions in Mortierella (Fig. , P = 0.05). Moreover, our results show that although very few genes were significantly up- or downregulated in response to warming (Table S1), Mortierella's fold change expression was significantly downregulated at a lower fold change compared to Penicillium in all functional categories (catabolic processes, Fig. , P < 0.01; cell homeostasis, Fig. , P < 0.01; cell morphogenesis, Fig. , P < 0.01; DNA regulation and organization, Fig. , P < 0.01; protein biosynthesis, Fig. , P = 0.01), whereas Penicillium's fold change expression in response to warming was, on average, significantly upregulated at a higher fold change compared to Mortierella (catabolic processes, Fig. , P = 0.03; cell homeostasis, Fig. , P = 0.01; cell morphogenesis, Fig. , P < 0.01; DNA regulation and organization, Fig. , P = 0.02; protein biosynthesis, Fig. , P < 0.01). In other words, although both species had up- and downregulated genes, Mortierella consistently downregulated (relative to control conditions) at a lower fold change than Penicillium, and Penicillium consistently upregulated (relative to control conditions) at a higher fold change than Mortierella. For specific genes of interest, fold change expression for all genes except cytochrome c oxidase was significantly downregulated at a lower fold change in response to warming in Mortierella compared to Penicillium (ATP synthase, Fig. , P < 0.01; cytochrome c oxidase, Fig.
, P = 0.25; elongation factor, Fig. 3j, P < 0.01; heat shock protein, Fig. 3k, P < 0.01; histone, Fig. 3l, P < 0.01; NADH dehydrogenase, Fig. 3m, P < 0.01; ribosomal protein, Fig. 3n, P < 0.01). In addition, only NADH dehydrogenase and ribosomal proteins were significantly upregulated at a higher fold change in Penicillium compared to Mortierella (Fig. , P = 0.01 and Fig. , P = 0.02, respectively). In terms of strategizing resource investment under warming, Mortierella and Penicillium again displayed significant differences (Fig. ). In response to warming, Mortierella transcribed mostly genes involved in glutamate metabolism (1-pyrroline-5-carboxylate dehydrogenase) and methylation control (adenosylhomocysteinase). Penicillium transcribed many genes, including those involved in the biosynthesis of pyrimidine (aspartate carbamoyltransferase), citric acid metabolism (citrate synthase), biosynthesis of glutamine (glutamine synthase), breakdown of sugars (glycoside hydrolase and transketolase), secondary metabolites (terpenoid synthase), breakdown of xylose (xylose reductase), and metabolism of sulfur (sulfite reductase) (Table S1). Some genes were transcribed by both fungal species but differed in their warming response. For example, both fungi transcribed genes involved in urea production and glycolysis, but Penicillium transcribed more under warming, in contrast to Mortierella, which transcribed less. As such, fold change expression for these genes appeared as upregulated for Penicillium and downregulated for Mortierella (Table S1) (Fig. ). Moreover, in response to warming, Penicillium transcribed genes involved in cell wall formation, such as 1,3-beta-glucanosyltransferase and α-glucan synthase, as well as membrane proteins, actin, and tubulin. Mortierella also transcribed genes for actin and tubulin but in lower abundance (Table S1).
Protein biosynthesis genes were the most abundant transcripts in Penicillium and Mortierella in both control conditions and the warmed treatment (Fig. and Table S1). Most protein biosynthesis transcripts in our data are involved in ribosome biogenesis (e.g. 40S and 60S ribosomal proteins). The production of ribosomes is an energy-demanding process associated with rapid growth, but it can also be associated with protection against oxidative stress (Albert et al. ). Under control conditions, Mortierella and Penicillium may be transcribing ribosomal proteins for rapid growth, while under warmed conditions ribosomal proteins may be conferring protection against oxidative stress. Since Mortierella and Penicillium maintained and increased, respectively, their investment in catabolic processes under warming (Fig. ), reactive oxygen species, a by-product of aerobic metabolism, may be accumulating in cells, causing oxidative stress (Shimizu ). Therefore, under warming, Mortierella and Penicillium may need to keep up the production of ribosomes for protection against oxidative stress. This strategy may be especially needed for Mortierella, since antioxidant-related genes (e.g. thioredoxin) were transcribed at lower abundance under warming and drying (Table S1). Interestingly, ribosome biogenesis has been positively correlated with the ability of certain fungi to rapidly consume glucose through the fermentation pathway (Mullis et al. ). This relationship could allow them to keep growing through the consumption of sugars via low-efficiency fermentation (Mullis et al. ). Indeed, the ability to ferment, especially under aerobic conditions (i.e. the Crabtree effect), has been associated with a selective advantage in yeast (Piškur et al. ), and fermentation under aerobic conditions has also been documented in other fungi (e.g. Mullis et al. ). In our study, genes that may be related to fermentation, such as zinc-dependent alcohol dehydrogenase (Raj et al. ), were transcribed at higher abundance under warming in Penicillium (Table S1). Although alcohol dehydrogenases have other functions that do not necessarily relate to fermentation, these results suggest that Penicillium could be fermenting to acquire energy under warming. If so, this strategy might provide Penicillium with a competitive advantage over other fungi under the warming treatment. Only two specific genes of interest, ATP synthase and cytochrome c oxidase, were transcribed more under warming compared to control conditions, and only in Penicillium (Fig. ). Even though the transcription of heat shock proteins as a whole was not significantly different between control conditions and the warmed treatment (Fig. ), the transcription of heat shock proteins 70 and 90, which are known to have roles in morphogenesis, heat stress, and pH stress (Tiwari et al. ), was highly upregulated in response to warming in both fungal species (Table S1). These results suggest that under warming, Mortierella and Penicillium may be investing in the production of protective molecules because they may have been experiencing heat and/or pH stress. Similar results have been reported in other fungi exposed to heat and/or pH stress. Specifically, Schizophyllum commune transcribed heat shock proteins 70 and 90 after experiencing a temperature shift from 21°C to 55°C (Higgins and Lilly ).
Also, Neurospora crassa and Aspergillus nidulans transcribed genes for heat shock proteins 70 and 90 in response to heat shock and extracellular pH changes (Mohsenzadeh et al. , Squina et al. , Freitas et al. ). Even though Mortierella and Penicillium experienced a relatively small temperature shift (∼0.5°C–1.5°C on average), these changes in abiotic conditions over a long period of time (∼10 years) may have exerted chronic stress, resulting in the upregulation of certain heat shock protein genes in response to warming (Table S1). Although these proteins are known to be produced in response to unfavorable conditions, biotic or abiotic, they also have roles in basic biological processes, such as gene transcription and protein translation (Tiwari et al. ). Aside from changes in gene expression, fungi may shift the function of heat shock proteins, using them for transcription and translation under control conditions and for cell homeostasis and protection under warming. Such a shift in function would not change the number of transcripts between control and warmed samples. It has been proposed that the upregulation of specific genes plays a critical role in the retention of said genes (Zhang et al. ). Thus, further attention to the upregulation of genes in wild fungal communities exposed to global change drivers may provide insight into adaptation strategies and traits that may be under selection. We speculate that the increase in transcription of most functional categories in Penicillium but not in Mortierella (Fig. and Table S1) could support the idea that even though both species were active under warming, Penicillium seemed to be thriving while Mortierella seemed to be merely surviving (Fig. ). Specifically, Penicillium showed increased activity in most metabolic processes, except protein biosynthesis, which remained unchanged (Fig. ). In addition, we found evidence that Penicillium may have been actively growing, because it transcribed genes involved in cell wall and membrane formation. In contrast, Mortierella showed reduced activity in cell homeostasis and DNA regulation and organization, and no change in catabolic processes, cell morphogenesis, and protein biosynthesis, which may indicate that Mortierella was investing in cell structure maintenance rather than growth. Altogether, this suggests that responses to warming, and thus potential adaptation pathways, may be species-specific. However, both species either maintained or increased investment in catabolic processes, cell morphogenesis, and protein biosynthesis, suggesting that prioritizing those processes may be critical for their survival. The fact that the total transcription activity of neither fungus differed between control conditions and the warmed treatment suggests that the interspecific differences in functional gene transcription did not result in a change in total activity levels (Fig. ) and that both fungi have been able to survive a decade of chronic stress. By studying two fungal species and their response to warming, we present a model study for tracking species-level responses to global change drivers in a natural soil environment. Our results represent a snapshot specific to the day and time when we collected soil. Thus, future studies should concentrate efforts on changes across time (i.e. minutes, days, months, years), as research has shown that microbial resource investment is highly dynamic and varies with season (Žifčáková et al. , ).
Considering microscale variations should also be a priority, as we found substantial plot variation (Fig. S1), probably the effect of plot-specific differences in soil conditions and/or plot microclimate. Even though these variations probably do not have an effect at the ecosystem scale in our work, the added effect of microscale interactions may give rise to large-scale effects on biogeochemical cycles (Kim and Or , König et al. ). Moreover, future studies should explore more than two fungal species to provide a broader overview of the metabolic investment and potential adaptation strategies that fungi undergo when exposed to warming. Specifically, investigating how transcription changes in fungi that increase versus decrease in abundance under warming may clarify which genes provide a competitive advantage or disadvantage under stress. Similarly, investigating fungi with different ecological functions and focusing on functional genes, such as decomposition-related genes (e.g. CAZy) (Lombard et al. ), would provide an overview of how warming is affecting the fungal community more broadly, as well as the carbon cycling processes it mediates. Although we identified some CAZy transcripts in our dataset, these were not significantly different between the control conditions and the warming treatment (Table S1). Finally, future studies will benefit from increased computational power in high-performance computing clusters and the development of memory-efficient software, as access to random access memory (RAM) limited the number of samples that we could analyze (Romero-Olivares et al. ). In conclusion, our work offers insight into how two fungal species are responding to warming in a natural soil environment. We present a model study, which can be replicated in other ecosystems, to track species-level responses in a natural soil environment and to provide insight into the specific strategies that local fungal species adopt to ensure their survival under global climate change. We found evidence that investing in the transcription of critical genes involved in catabolic processes, cell morphogenesis, and protein biosynthesis under warming has allowed Mortierella and Penicillium to withstand over a decade of chronic stress. This suggests that maintaining catabolic rates and processes while growing and protecting their cells may be a good strategy for fungi to survive under global climate change.
Carl Wilhelm Sem-Jacobsen

Sem-Jacobsen was originally hired at Gaustad Hospital to improve the methods for frontal lobotomy. Lobotomy was at that time widely practiced around the world, often with non-stereotactic techniques. The procedure was common across Norway, often performed by non-neurosurgeons and with high morbidity and mortality. At Sem-Jacobsen's EEG institute at Gaustad Hospital, neurosurgeons from Rikshospitalet (The National Hospital, now part of Oslo University Hospital) implanted depth electrodes stereotactically into the brain, and Sem-Jacobsen then performed neurophysiologic recordings and electrical test stimulation before making a lesion. Sem-Jacobsen wrote that “by introducing depth electrodes in the general area where the lesion is to be made, and thus plotting the electrical activity and the responses to electrical stimulation of these electrodes, it is possible to reduce considerably the size of the lesion.” Lobotomy has remained controversial. No detailed clinical data from Gaustad are available on efficacy, adverse effects, or complications, but contemporary neurosurgeons confirm that surgical complications and mortality were significantly reduced by this prelesion brain mapping (Dr. Eivinn Hauglie-Hanssen, personal communication). Despite working in a mental hospital, Sem-Jacobsen soon started to focus on Parkinson disease. Stereotactic neurosurgery for Parkinson disease, with thalamotomy, pallidotomy, and lesions at other targets, had been introduced around 1950, and Sem-Jacobsen and his collaborators took this up at Gaustad Hospital a few years later. Sem-Jacobsen himself constructed new electrodes that could be used for extensive electrical mapping and stimulation at various brain sites to identify the best target, and through which he could inject a toxin (ethyl cellulose in ethanol) to make a small chemical lesion. With this new technique, only one neurosurgical operation was needed. The practical procedures have been well documented, but published articles with clinical follow-up data are lacking. Sem-Jacobsen and his colleagues presented their observations at various national and international neurology meetings. They reported good clinical long-term results and low complication rates, for example with only one death among 30 operated patients. Mortality rates for lesion surgery in Parkinson disease at that time were usually much higher: Tygstrup and Nørholm, for instance, reported at the same time on 12 operated patients from Denmark, none of whom survived for more than 3 years. After the neurosurgical implantation of electrodes, Sem-Jacobsen's patients were recorded and stimulated over several weeks before a chemical lesion was made and the electrodes removed. Results from deep brain stimulation in 10 of these patients were published in the proceedings of the 1962 Scandinavian Neurology Meeting. At this meeting, they also presented a film documenting the treatment, but the film later became unavailable. Although their aim was never to use permanent stimulation as a treatment, their diagrams document that chronic deep brain stimulation was effective against bradykinesia as well as tremor and rigidity. In a historical review, Hariz et al. identified this as the first detailed account of deep brain stimulation in Parkinson disease.
While doing studies with depth electrodes in the brain, Carl Wilhelm Sem-Jacobsen also developed tools for other neurophysiologic recordings. He was the son of a famous Norwegian flight pioneer and was always fascinated by aviation. Supported by the US Air Force and NASA, and in collaboration with Danish engineer Edmund Kaiser, he developed a miniature 4-channel EEG and EKG recorder that could be used for in-flight recordings. The system was named Vesla (Norwegian for “Little girl”) after the nickname of Sem-Jacobsen's wife. Electrodes were glued to the scalp and to the chest, and the system was used for monitoring jet fighter pilots as well as the astronauts of the Apollo Moon landing program. With airborne EEG recordings and simultaneous films, he documented that a number of active jet fighter pilots had brief periods of unconsciousness during stressful maneuvers, revealing a possible cause of pilot errors that could explain a number of aircraft accidents. Sem-Jacobsen later extended the system with a Vesla aircraft seat pad for EKG monitoring of pilots. Neil Armstrong wore EEG and EKG monitoring equipment developed by Sem-Jacobsen when taking his first steps on the Moon. The Vesla system was also used for testing US Navy divers (unpublished). Later, Sem-Jacobsen took up neurophysiologic monitoring in nonmilitary divers, with a special focus on deep-sea divers working for North Sea oil drilling companies and their working environment.
When Sem-Jacobsen built his EEG institute at Gaustad Hospital, he soon started collaborating with neurosurgeons to implant his self-constructed depth electrodes before chemical lesioning. Each patient could have at least 8 electrodes, each with many channels and points for recording or stimulation, allowing extensive mapping at different sites in the brain. In the beginning, these recordings were made in psychiatric patients before lobotomy, but he later worked primarily with patients with Parkinson disease. The special nature of these recordings and the electrical stimulation trials was considered frightening and mysterious, both by employees at Gaustad Hospital and by others. Apart from his work as a neurophysiologist, Sem-Jacobsen was most interested in constructing medical-technical devices and performing prelesion recordings; other clinicians did the long-term follow-up of the patients, and most of his publications deal with technical and experimental procedures. Because of the special funding of Sem-Jacobsen's institute by American military forces, rumors soon started to spread that he was performing secret mind control experiments. Since he had met and personally knew CIA Director William E. Colby during WW2, word was out that he was working for the CIA. Skepticism may also have been related to his involvement in lobotomy, a procedure that was abandoned a few years later. The conspiracy theories were nourished by investigative journalists claiming that Norwegian authorities were also part of the plot and were supporting Sem-Jacobsen's secret mind control experiments for the CIA and the US military forces. These allegations peaked in the year 2000 with the presentation of a conspiracy documentary on the Norwegian television channel TV2. Because of the serious allegations, the Norwegian Government in 2001 appointed a special multidisciplinary hearing committee to investigate whether unethical medical experiments had been performed on human beings in Norway during the period 1945–1975. The committee was especially asked to evaluate experiments with deep brain electrodes. The conclusions of this hearing committee were published in 2003. They state that all procedures in Sem-Jacobsen's neurophysiology institute at Gaustad Hospital were performed on strict indication for medical treatment. Furthermore, the committee found no evidence that Sem-Jacobsen received any financial or other support from the CIA. It comments that recording and electrical stimulation may have been somewhat more extensive than necessary, both in duration and in the brain locations covered, but concludes that “this has not been to the patient's disfavor since the data also could be used to improve treatment in each patient. The relation between treatment and research must thus be considered as within ethical limits for medical research. The commission don't see that the extensive financial support received by Sem-Jacobsen from American sources will change this view.”
The hearing committee investigating Sem-Jacobsen did meticulous work, interviewing numerous people including health professionals, patients and caregivers, politicians, and American military and CIA superiors. It reached a definite conclusion, although some possibly relevant information was unavailable: the committee did not get access to secret archives of the Pentagon and the CIA, and it was unable to locate Sem-Jacobsen's personal archive documenting his observations. The reason for the latter has since become evident: the Sem-Jacobsen family felt hounded by journalists even many years after Carl Wilhelm Sem-Jacobsen's death in 1991, and they decided to burn all his personal files (Bjørn Erik Sem-Jacobsen, personal communication). Sem-Jacobsen also documented the different procedures on film, such as the aforementioned Parkinson deep brain stimulation film shown in 1962. These films later disappeared; the hearing committee searched for them but was unable to find them. I have tried to locate Sem-Jacobsen's films for many years and finally managed to get in touch with his family. They have long felt that Sem-Jacobsen's work has been undeservedly disregarded, and they therefore decided to help me recover available information. With their help, I was eventually able to locate many of Sem-Jacobsen's films and photographs in an old barn in rural Norway. These films and photographs show how Sem-Jacobsen and his collaborators performed in-action neurophysiology recordings in divers, pilots, and astronauts. One film gives detailed documentation of how he, in collaboration with experienced neurosurgeons, conducted the first trials with deep brain stimulation in patients with Parkinson disease. Modern subthalamic deep brain stimulation for Parkinson disease was introduced in 1995 by the Grenoble group, but it appears from the old films that Sem-Jacobsen tried deep brain stimulation in this area as early as the 1950s. Sem-Jacobsen shows that parkinsonian symptoms are relieved by electrical stimulation close to the red nucleus, and based on our current knowledge, it seems plausible that the subthalamic nucleus was the actual site of his stimulations. The Sem-Jacobsen family has now donated the old films to the National Medical Museum, part of the Norwegian Museum of Science and Technology in Oslo. The films are currently being digitized and will then be available for further studies of the achievements of this neurophysiologic pioneer.
CCN2/CTGF expression does not correlate with fibrosis in myeloproliferative neoplasms, consistent with noncanonical TGF-β signaling driving myelofibrosis

The classical BCR::ABL1-negative myeloproliferative neoplasms (MPN) include the subtypes essential thrombocythemia (ET), polycythemia vera (PV), and primary myelofibrosis (PMF). These form an important group of bone marrow (BM) diseases, characterized by the proliferation of cells of one or more of the myeloid lineages and the potential to undergo progression to myelofibrosis or acute myeloid leukemia. BM fibrosis, resulting from the deposition of reticulin fibers and sometimes also collagen fibers, is an important cause of morbidity and mortality in MPN patients, as it impairs normal hematopoiesis, leading to marrow failure and life-threatening cytopenias. Previous studies, mainly performed in myelofibrosis models, indicate that the mutant/malignant megakaryocytes contribute to the development of fibrosis by increased expression of fibrotic and pro-inflammatory cytokines and interleukins, growth factors (including transforming growth factor-β (TGF-β)), extracellular matrix components, and other factors. Still, many facets of the development of BM fibrosis and the sequential events that drive stromal activation and fibrosis remain elusive. Due to its critical involvement in many fibrotic processes, a role of Cellular Communication Network 2 (CCN2) in the pathogenesis of BM fibrosis seems likely. CCN2, also known as CTGF (Connective Tissue Growth Factor), is a matricellular protein belonging to the Cellular Communication Network (CCN) family. It is considered an important driver and biomarker of organ fibrosis in a wide range of diseases. CCN2 is involved in the proliferation, migration, and differentiation of cells and can promote fibrosis directly or by acting as a factor downstream of TGF-β, which is a powerful and well-known inducer of CCN2 transcription, but many other factors also directly or indirectly induce CCN2 mRNA expression. The mode of action of CCN2 is complex and variable, as illustrated in Fig. . The way(s) by which CCN2 act(s) on BM target cells is, however, still largely unknown. In previous studies, CCN2 mRNA expression has been detected in BM mesenchymal stem and stromal cells of normal BM. In addition, altered CCN2 mRNA expression levels have been associated with BM malignancies: in B-acute lymphoblastic leukemia, increased CCN2 mRNA levels are present in B-lymphoblasts, whereas in acute myeloid leukemia, CCN2 mRNA overexpression has been detected in the mesenchymal stem/stromal cells. Furthermore, CCN2 mRNA extracted from BM biopsies of patients with myelofibrosis showed a 27-fold increase when compared to healthy controls, decreasing after allogeneic stem cell transplantation. None of these studies, however, investigated a possible relationship between CCN2 levels and BM fibrosis. Thus far, CCN2 protein expression in the BM has only been investigated in three previous studies. Chica et al. found positive staining of several cell populations of normal bone marrow but not of megakaryocytes, except for weak staining near the cell membranes, while Åström et al.
reported cytoplasmic positivity for CCN2 in a subpopulation (18%) of megakaryocytes in 1 of 5 patients with X-linked thalassemia, and in almost all (97%) megakaryocytes of all 6 primary myelofibrosis patients, while other hematopoietic cell lineages and the megakaryocytes in normal control BM biopsies were negative. Shergill et al. published an abstract with intriguing observations suggesting a role for CCN2 as a biomarker and a potential target for therapy in MF. However, the abstract (naturally) contained only limited information and statistical detail, and the data presented suggested considerable scatter and limited statistical significance. CCN2 staining was found in a higher percentage of megakaryocytes in biopsies of myelofibrosis patients compared to controls (63% vs 40%). This difference did not reach statistical significance (p = 0.28), but the mean percentage of CCN2-positive megakaryocytes was significantly higher in myelofibrosis patients at diagnosis (63%) compared to post-transplant biopsies (22%). A correlation between CCN2 expression and fibrosis was, however, not investigated. Unfortunately, no follow-up publication on this 2015 abstract has appeared to date, and no other studies have since addressed the role of CCN2 in BM fibrosis. Therefore, we set out to investigate CCN2 protein expression by immunohistochemical staining in a large cohort of 75 BM biopsies (55 MPN patients and 20 normal controls) and correlated the results with the amount of BM fibrosis and other disease parameters.
Patients

To study CCN2 protein expression, we performed immunohistochemistry on a total of 75 BM trephine biopsies, retrieved from the Department of Pathology of the University Medical Center Utrecht, the Netherlands. This cohort encompassed 55 BM biopsies of MPN patients and 20 BM biopsies with normal hematopoiesis. The 55 MPN biopsies consisted of 10 ET cases, 10 post-ET myelofibrosis (post-ET MF) cases, 10 PV cases, 10 post-PV myelofibrosis (post-PV MF) cases, 5 pre-fibrotic PMF (pre-PMF) cases, and 10 cases of PMF presenting in the overt fibrotic phase. The 20 normal biopsies had all been obtained as part of the staging procedure for a hematologic neoplasm (mostly diffuse large B-cell lymphomas from immune-privileged sites and 1 case of a MALT lymphoma of the lung), with all patients showing normal blood values and normal hematopoiesis without lymphoma involvement of the BM. The trephine biopsies were formalin-fixed, EDTA-decalcified, and paraffin-embedded. Relevant clinical parameters were extracted from digital patient files.

Fibrosis

BM reticulin was stained by a silver stain according to Gordon and Sweet, Gomori's silver impregnation, or the reticulin stain from DAKO using the Artisan automatic stainer. All reticulin stains were performed on 4 µm thick BM sections. The amount of fibrosis was graded on a scale from 0 to 3, according to the European consensus on grading bone marrow fibrosis, with 0 being no fibrosis (MF-0), 1 slight fibrosis (MF-1), 2 moderate fibrosis (MF-2), and 3 severe fibrosis (MF-3).

Immunohistochemistry

Immunohistochemistry for CCN2 was performed on 4 µm BM sections, using primary antibodies from Cell Signaling Technologies (CST) (cat. no 10095S and 86641S) and FG-3114/biotin, provided by FibroGen Inc. All cases were stained with the 10095S antibody. Five MPN cases showing overexpression with the 10095S antibody and 5 normal control cases were additionally stained with the 86641S and FG-3114 antibodies. All slides were deparaffinized and endogenous peroxidase was blocked. For the CST antibodies, antigen retrieval was done in Tris/EDTA solution at pH 9, followed by the primary antibodies (incubation of 86641S for 1 h and 10095S overnight at 4 °C) and detection with BrightVision/HRP anti-rabbit Ig (VWR). As supplied, FG-3114 was conjugated to biotin. Here, endogenous biotin was blocked with a biotin blocking kit (Vector), followed by the primary antibody for 2 h and HRP-labeled streptavidin (Dako). After the HRP-labeled reagents, peroxidase was visualized with Nova Red substrate (Vector labs), followed by a hematoxylin nuclear counterstain. The intensity of staining was scored semi-quantitatively on a scale from 0 to 3 according to the following staining categories: 0 = no staining, 1 = weak staining, 2 = moderate staining, 3 = strong staining. In contrast, moderate and very strong staining was confined to a subgroup of MPN cases. To highlight this feature, we applied an aggregated weighted score assigning increasing weight to moderate and strong staining patterns. The aggregated, weighted score was calculated as follows: 0 × (score 0) + 1 × (score 1) + 5 × (score 2) + 15 × (score 3).

Molecular studies

Driver mutations could be retrieved from patient files in 51 of 55 MPN cases. On the remaining 4 MPN cases, NGS was performed using a custom-made panel (Ion AmpliSeq™ Haemat Panel) consisting of 43 genes.
The presence of non-driver mutations was additionally investigated in the 7 MPN cases with CCN2 overexpression by NGS using the TruSight Oncology 500 (TSO-500) kit, testing 523 genes.

Statistical analysis

Statistics were done with IBM SPSS 29. Mann–Whitney U tests and linear regression models were used for statistical analysis.
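To make the aggregated, weighted score from the Immunohistochemistry section concrete, the sketch below implements it in R. It is a minimal illustration, not code from the study; in particular, it assumes that the per-category inputs are the fractions of megakaryocytes in each staining category, which is consistent with the reported score range (roughly 0 to 15) but is not stated explicitly above.

```r
# Minimal sketch of the aggregated, weighted CCN2 staining score.
# Assumption: f0..f3 are the fractions of megakaryocytes scored as
# 0 (none), 1 (weak), 2 (moderate), and 3 (strong); they must sum to 1.
ccn2_score <- function(f0, f1, f2, f3) {
  stopifnot(abs(f0 + f1 + f2 + f3 - 1) < 1e-6)
  0 * f0 + 1 * f1 + 5 * f2 + 15 * f3  # weights emphasize moderate/strong staining
}

# Example: 70% negative, 26% weak, 2% moderate, 2% strong megakaryocytes
ccn2_score(0.70, 0.26, 0.02, 0.02)  # 0.26 + 0.10 + 0.30 = 0.66
```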
Patient characteristics

In total, 75 BM biopsies were stained for CCN2 with the 10095S antibody: 55 MPN cases (10 × ET, 10 × post-ET MF, 10 × PV, 10 × post-PV MF, 5 × pre-PMF, 10 × PMF) and 20 control cases with normal marrow. The control group encompassed 15 males and 5 females, with a median age of 63.5 years (range 44–77 years). The MPN group encompassed 32 males and 23 females, with a median age of 60 years (range 16–77 years), not significantly different from the control group. In the MPN group, 39 cases (71%) contained a JAK2 V617F mutation, 12 cases (22%) a CALR mutation, and 2 cases (4%) an MPL mutation; 1 case (2%) was triple negative, and in 1 case, the driver mutation could not be determined due to insufficient DNA quality.

CCN2 expression

When stained for CCN2 with the 10095S antibody, the BM biopsies showed a variable degree of cytoplasmic staining of the megakaryocytes, while the myeloid and erythroid cell lineages were negative. The megakaryocytes showed variable staining, both within and between biopsies. The intensity of staining was scored semi-quantitatively on a scale from 0 to 3 (0 = no staining, 1 = weak staining, 2 = moderate staining, 3 = strong staining), as illustrated by Fig. . There was no extracellular staining, which is remarkable as CCN2 is generally considered to be an extracellular protein. Throughout the literature, this appears to be a consistent finding for CCN2 staining, also in other tissues, and it was confirmed with 2 additional independent antibodies in the present study. Possible explanations for this finding have thus far largely remained speculative. In normal BM, the majority (96–100%) of megakaryocytes in the biopsy showed no or weak staining, as illustrated in Fig. a. A small number of megakaryocytes (up to 4%) within each biopsy displayed moderate or strong staining, with strongly staining megakaryocytes accounting for at most 2% of the total number of megakaryocytes. The CCN2 expression score ranged from 0.05 to 1.26 with a median score of 0.62. In MPN, the findings were largely similar, although a few cases (N = 7, 13%) clearly stood out by a much stronger staining of the megakaryocytes, with a markedly increased number of megakaryocytes displaying moderate or strong cytoplasmic staining, as illustrated in Fig. b. The moderately/strongly staining megakaryocytes could morphologically not be discerned from the negatively/weakly staining megakaryocytes within the same biopsy. The CCN2 score ranged from 0.00 to 5.90 with a median score of 0.54. The CCN2 score of the MPN group as a whole was not statistically different from that of the normal control group (p = 0.741). The CCN2 scores of all 75 normal and MPN cases were analyzed by a box-and-whisker plot, which showed 7 outliers (i.e., values greater than the third quartile plus 1.5 × IQR) with significantly higher CCN2 expression than the rest of the cases. These cases with a high number of moderately and strongly staining megakaryocytes consisted of 3 ET cases (CCN2 scores: 2.82, 3.10, and 3.38), 1 pre-PMF case (CCN2 score: 2.90), 1 post-PV MF case (CCN2 score: 4.65), and 2 PMF cases (CCN2 scores: 2.42 and 5.90).

Correlation of CCN2 expression with fibrosis

All of the normal BM biopsies and 17 of 75 (23%) MPN cases (10 × ET, 5 × PV, 2 × pre-PMF) showed no fibrosis (MF-0). Mild fibrosis (MF-1) was found in 7 MPN cases (3 × pre-PMF and 4 × PV), moderate fibrosis (MF-2) in 11 MPN cases (7 × post-ET MF, 2 × post-PV MF, 2 × PMF), and severe fibrosis (MF-3) in 18 MPN cases (3 × post-ET MF, 8 × post-PV MF, 7 × PMF).
No correlation was found between the CCN2 score and the amount of BM fibrosis (p = 0.966).

Correlation of CCN2 expression with clinical parameters

CCN2 scores did not correlate with age, sex, type of driver mutation, blood values (hemoglobin, leucocytes, platelets, LDH), or the occurrence of thrombovascular events in MPN patients. There was no significant difference in CCN2 score between the different MPN subgroups (p = 0.703).

CCN2 protein overexpression in MPN

In 7 MPN cases (13%), immunohistochemical staining with the 10095S antibody showed significant CCN2 overexpression in megakaryocytes: 3 ET cases, 1 post-PV MF, 1 pre-PMF, and 2 PMF cases. Their characteristics are shown in Table . Four were males and three were females, and the median age was 58 years (range 24–70 years), which was not significantly different (p = 0.338) from the 48 MPN cases without CCN2 overexpression (median age 62 years, range 23–77 years). Blood values (hemoglobin, leukocyte counts, platelet counts, LDH) were not significantly different from those of patients without CCN2 overexpression. Thrombovascular events were as common in MPN patients with CCN2 overexpression (28%) as in those without (27%). Four cases showed no fibrosis (MF-0), 2 showed moderate fibrosis (MF-2), and 1 severe fibrosis (MF-3). Six of the MPN cases with CCN2 overexpression carried a JAK2 V617F mutation with a variant allele frequency (VAF) ranging from 5.8 to 79%, not significantly correlating with the CCN2 score (p = 0.090). The seventh, one of the PMF cases, carried a CALR exon 9 (p.Lys385fs, type II) mutation. The presence of additional pathogenic non-driver mutations was investigated, and an additional mutation was detected in 6 of the 7 cases: DNMT3A (ET and PMF), TNFAIP3 (ET), TP53 (post-PV MF), SF3B1 (pre-PMF), and CUX1 (PMF). One patient with PMF died of pneumonia, and the patient with post-PV MF died after having progressed to a blast phase (in the form of an acute megakaryoblastic leukemia). One patient with ET showed progression to PV. The other patients did not show progression, although the follow-up time was limited.

Staining with additional CCN2 antibodies

To assess whether the CCN2 staining of the megakaryocytes with the 10095S antibody was specific, 5 MPN cases showing overexpression as well as 5 normal BM biopsies were additionally stained with 2 other CCN2 antibodies: FG-3114 and 86641S. Like the 10095S antibody, these two antibodies also showed cytoplasmic staining of the megakaryocytes. In addition, the aggregated, weighted staining score for the FG-3114 and 86641S antibodies was higher in the 5 selected MPN cases than in the controls, further supporting the notion that the staining with these 3 antibodies indeed reflects CCN2 protein expression.
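The box-and-whisker outlier rule used in the CCN2 expression results above (values greater than the third quartile plus 1.5 × IQR, i.e. Tukey's upper fence) can be made explicit with a short R sketch. This is an illustration only; the score vector below is synthetic and is not the study data.

```r
# Minimal sketch of the outlier rule described above (Tukey's upper fence):
# a biopsy is flagged when its CCN2 score exceeds Q3 + 1.5 * IQR.
flag_outliers <- function(scores) {
  q3 <- quantile(scores, 0.75, names = FALSE)
  scores > q3 + 1.5 * IQR(scores)
}

# Synthetic example: many low scores plus a few high-scoring biopsies
scores <- c(rep(c(0.3, 0.5, 0.7), 20), 2.8, 3.1, 5.9)
scores[flag_outliers(scores)]  # returns 2.8 3.1 5.9
```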
In total, 75 BM biopsies were stained for CCN2 by the 10095S antibody, of which 55 MPN cases (10 × ET, 10 × post-ET MF, 10 × PV, 10 × p-PV MF, 5 × pre-PMF, 10 × PMF) and 20 control cases with normal marrow. The control group encompassed 15 males and 5 females, with a median age of 63.5 years (range 44–77 years). The MPN group encompassed 32 males and 23 females, with a median age of 60 years (range 16–77 years), being not significantly different from the control group. In the MPN group, 39 cases (71%) contained a JAK2 V617F mutation, 12 cases (22%) a CALR mutation, 2 cases (4%) an MPL mutation, 1 case (2%) was triple negative, and in 1 case, the driver mutation could not be determined due to insufficient DNA quality.
When stained for CCN2 by the 10095S antibody, the BM biopsies showed a variable degree of cytoplasmic staining of the megakaryocytes, while the myeloid and erythroid cell lineages were negative. The megakaryocytes showed variable staining, both within and between biopsies. The intensity of staining was scored semi-quantitatively on a scale from 0 to 3 (0 = no staining, 1 = weak staining, 2 = moderate staining, 3 = strong staining) as illustrated by Fig. . There was no extracellular staining, which is remarkable as CCN2 is generally considered to be an extracellular protein. Throughout the literature, this appears to be a consistent finding for CCN2 staining, also in other tissues , and it was confirmed with 2 additional independent antibodies in the present study. Possible explanations for this finding have thus far largely remained speculative. In normal BM, the majority (96–100%) of megakaryocytes in the biopsy showed no or weak staining as illustrated in Fig. a. A small number of megakaryocytes (up to 4%) within each biopsy displayed moderate or strong staining, with strongly staining megakaryocytes accounting for at most 2% of the total number of megakaryocytes. The CCN2 expression score ranged from 0.05 to 1.26 with a median score of 0.62. In MPN, the findings were largely similar, although a few cases ( N = 7, 13%) clearly stood out by a much stronger staining of the megakaryocytes with a markedly increased number of megakaryocytes displaying moderate or strong cytoplasmic staining, as illustrated in Fig. b . The moderate/strong staining megakaryocytes could morphologically not be discerned from the negative/weak staining megakaryocytes within the same biopsy. The CCN2 score ranged from 0.00 to 5.90 with a median score of 0.54. The CCN2 score of the MPN group as a whole was not statistically different from the normal control group ( p = 0.741). The CCN2 scores of all 75 normal and MPN cases were analyzed by a box-and-whisker plot, which showed 7 outliers (i.e., values greater than 1.5 IQR plus the third quartile), with a significant higher CCN2 expression than the rest of the cases. These cases with a high number of moderate and strong staining megakaryocytes consisted of 3 ET cases (CCN2 scores: 2.82, 3.10, and 3.38), 1 pre-PMF case (CCN2 score: 2.90), 1 post-PV MF case (CCN2 score: 4.65), and 2 PMF cases (CCN2 scores: 2.42 and 5.90).
All of the normal BM biopsies and 17 of 75 (23%) MPN cases (10 × ET, 5 × PV, 2 × pre-PMF) showed no fibrosis (MF-0). Mild fibrosis (MF-1) was found in 7 MPN cases (3 × pre-PMF and 4 × PV), moderate fibrosis (MF-2) in 11 MPN cases (7 × post-ET MF, 2 × post-PV MF, 2 × PMF), and severe fibrosis (MF-3) in 18 MPN cases (3 × post-ET MF, 8 × post-PV MF, 7 × PMF). No correlation was found between the CCN2 score and the amount of BM fibrosis ( p = 0.966).
CCN2 scores did not correlate with age, sex, type of driver mutation, blood values (hemoglobin, leucocytes, platelets, LDH) or the occurrence of thrombovascular events in MPN patients. There was no significant difference in CCN2 score between the different MPN subgroups ( p = 0.703).
In 7 MPN cases (13%), immunohistochemical staining by the 10095S antibody showed significant CCN2 overexpression of megakaryocytes. These were 3 ET cases, 1 post-PV MF, 1 pre-PMF, and 2 PMF cases. Their characteristics are shown in Table . Four of them were males and three were females, and the median age was 58 years (range 24–70 years), which was not significantly different ( p = 0.338) from the 48 MPN cases without CCN2 overexpression (median age 62 years, range 23–77 years). Blood values (hemoglobin, leukocyte counts, platelet counts, LDH) were not significantly different from patients without CCN2 overexpression. Thrombovascular events were as common in MPN patients with CCN2 overexpression (28%) as in those without (27%). Four cases showed no fibrosis (MF-0), 2 showed moderate fibrosis (MF-2), and 1 severe fibrosis (MF-3). Six of the MPN cases with CCN2 overexpression contained/showed a JAK2 V617F mutation/clone with a variant allele frequency (VAF) ranging from 5.8 to 79%, not significantly correlating with the CCN2 score ( p = 0.090). The seventh, one of the PMF cases, contained/showed a CALR exon 9 (p.Lys385fs, type II) mutation/clone. The presence of additional pathogenic non-driver mutations was investigated, and an additional mutation was detected in 6 of the 7 cases, being DNMT3A (ET and PMF), NFAIP3 (ET), TP53 (post-PV MF), SF3B1 (pre-PMF), and CUX1 (PMF). One patient with PMF deceased due to a pneumonia and the patient with post-PV MF deceased after having progressed to a blast phase (in the form of an acute megakaryoblastic leukemia). One patient with ET showed progression to PV. The other patients did not show progression, albeit the follow-up time was limited.
To assess whether the CCN2 staining of the megakaryocytes by the 10095S antibody was specific, 5 MPN cases showing overexpression as well as 5 normal BM biopsies were additionally stained with 2 other CCN2 antibodies: FG-3114 and the 86641S antibody. Like the 10095S antibody, these two other CCN2 antibodies also showed cytoplasmic staining of the megakaryocytes. In addition, the aggregated, weighted staining scores for the FG-3114 and 86641S antibodies were higher in the 5 selected MPN cases than in the controls, further supporting the notion that the staining with these 3 antibodies indeed reflects CCN2 protein expression.
In this study, we investigated the protein expression of the profibrotic factor CCN2 in BM by immunohistochemistry. We included 20 normal BM biopsies and 55 BM biopsies of MPN patients and correlated the CCN2 staining results with the amount of BM fibrosis and clinical parameters. CCN2 protein expression was detected to a variable degree in the cytoplasm of megakaryocytes, while granulopoiesis and erythropoiesis were negative. High levels of CCN2 expression were seen in 7 (13%) MPN cases, but no correlation was observed between CCN2 expression and fibrosis, neither in the total study group nor in the subgroup with CCN2 overexpression. Thus, (megakaryocytic) CCN2 expression appears not to be a key feature in the development of fibrosis in MPN, in contrast to the prominent role that CCN2 plays in the development of fibrosis in many organ diseases outside the marrow . The lack of correlation between CCN2 expression and BM fibrosis is remarkable, especially because the prototypical fibrosis inducer TGF-β, known to also contribute to myelofibrosis in MPN , is a potent inducer of CCN2, the two forming a positive feedback loop . However, whereas induction of CCN2 gene transcription by TGF-β typically involves the canonical SMAD2/3 pathway , a recent study showed that noncanonical c-Jun N-terminal kinase (JNK)-dependent TGF-β signaling in mesenchymal stromal cells is responsible for the development of BM fibrosis in MPN . Our findings may thus be interpreted as further evidence that the noncanonical, rather than the canonical, pathway of TGF-β signaling drives BM fibrosis in MPN. This further supports the notion that the exploration of novel modalities for treatment and prevention of BM fibrosis in MPN patients might best focus on the noncanonical pathway of TGF-β signaling. Interestingly, a subgroup of 7 (13%) MPN cases showed significant CCN2 overexpression. Four of these showed no fibrosis (MF-0), 2 showed moderate fibrosis (MF-2), and 1 severe fibrosis (MF-3), unrelated to the amount of CCN2 expression. Blood values and clinical signs did not reveal a specific trend. Although CCN2 overexpression was specifically observed in megakaryocytes, no correlation was observed between the level of CCN2 expression and the platelet counts or occurrence of thrombovascular events in this small subgroup; additional studies on larger cohorts of MPN patients will be needed to better investigate the possible significance of megakaryocytic CCN2 overexpression in the development or complications of MPN. We detected CCN2 protein mainly in megakaryocytes, confirming the findings of previous studies . Chica et al. reported no staining of (normal) megakaryocytes , but in the study by Åstrom et al., all 6 PMF cases showed cytoplasmic CCN2 staining, with on average 97% of megakaryocytes staining positive, while their 6 normal controls were negative . Also, Shergill et al. described variable cytoplasmic staining of megakaryocytes, with a higher percentage of megakaryocytes showing positive CCN2 staining in myelofibrosis patients compared to healthy controls . Discrepancies between the studies might be explained by the use of different types of antibodies. To confirm our results and to rule out non-specific binding of the antibody, we used 2 other CCN2 antibodies binding to different CCN2 epitopes, and the results were in line with our findings with the 10095S antibody. Shergill et al.
found mean CCN2 mRNA levels extracted from BM biopsies of myelofibrosis patients to be increased 27-fold compared with controls, but this difference was not significant. This might suggest that the elevated mean was caused by a small subgroup with very high expression, consistent with our observation of only a subgroup of MPN cases showing markedly increased CCN2 protein expression. We also attempted to verify our results at the mRNA level by in situ hybridization on BM sections but did not obtain an interpretable result, probably due to poor mRNA quality after fixation and decalcification. As whole tissue sections were used in the study by Shergill et al., and not isolated megakaryocytes, definite proof that the megakaryocytes are the source of the CCN2 mRNA in BM is still lacking. Megakaryocytes contain and secrete many factors, by which they contribute to the functioning of the BM microenvironment, including the hematopoietic stem cell niche . They also participate in inflammation and immunity . Their role in MPN is well established, with mutant megakaryocytes playing a key role by promoting myeloproliferation and fibrosis . The heterogeneous CCN2 staining of the megakaryocytes within individual biopsies might reflect different megakaryocyte subtypes, as previously described , or differences between subclones. Morphologically, however, a specific subtype of megakaryocytes showing overexpression could not be detected, and the degree of CCN2 overexpression did not correlate with the JAK2 mutant allele frequency. There was also no difference in CCN2 expression between JAK2-positive and JAK2-negative MPNs, as opposed to the trend reported in a previous small pilot study . Outside the BM, CCN2 and its fragments have been implicated not only in fibrosis but also in the regulation of cell proliferation, differentiation, adhesion, migration, cell survival, apoptosis, and senescence . At least part of CCN2's biological activity is mediated through interaction with a host of other proteins and receptors, by which it can modify their signaling activity and cross-talk . Therefore, it will be challenging to fully explore the potential roles of CCN2 in megakaryocytes of normal BM (although the expression is low), of the subgroup of MPN with overexpression, and beyond. In summary, in BM biopsies, we observed variable CCN2 expression in megakaryocytes, a cell type increasingly recognized for its importance in the regulation of the BM microenvironment, including the established role of mutant megakaryocytes in MPN in promoting myeloproliferation and fibrosis. In MPN, remarkable CCN2 overexpression was detected in a subgroup (13%). CCN2 expression, however, did not correlate with fibrosis or other disease parameters, neither in the whole study group nor in the subgroup with CCN2 overexpression in megakaryocytes. Our data suggest that CCN2 is not a key driver of myelofibrosis in MPN, and that noncanonical, CCN2-independent, rather than canonical TGF-β signaling might be responsible for the development of fibrosis in MPN.
Effects of direct and conventional planting systems on mycorrhizal activity in wheat grown in the Cerrado | 5a16da3c-c353-4e40-b114-a0f08dd6e245 | 11493960 | Microbiology[mh] | Arbuscular mycorrhizal fungi (AMF) occur naturally in soils and, when associated with plants, increase their capacity to absorb nutrients from the soil, resist attack by pathogens in the root system, increase the water absorption capacity , and play an important ecological role in nutrient cycling. AMF synthesize a glycoprotein called glomalin that has a gluing function , increasing the stability of aggregates in the soil , . The easily extractable glomalin-related soil protein (EE-GRSP) is a hydrophobic, recalcitrant, and thermostable glycoprotein that may be produced by soil organisms, particularly by arbuscular mycorrhizal fungi that cement the soil , and it is associated with stable macro- and microaggregates in the soil , , . The cementing property of the EE-GRSP helps to join soil particles, which favors the formation of stable aggregates. In addition, this protein adsorbs heavy metals, reducing the availability and risk of toxicity of these elements to organisms and plants in polluted soils , . AMF can be associated with numerous plant species, including wheat. This crop is one of the most cultivated grains in the world, and according to Conab 2021, it is grown mainly in southern Brazil. However, the Central Brazil region is a great alternative for the expansion of wheat production, both under rainy (out of season) conditions and under irrigated systems , . Wheat crop expansion to the Brazilian Cerrado region may contribute to the increase in the production of this important cereal. The Brazilian Cerrado is currently the main agricultural production frontier in Brazil and worldwide, although this region has some production limitations, such as seasonal rain and low soil fertility , – . Despite having several materials cultivated in the Cerrado , little is known about their association with arbuscular mycorrhizal fungi that occur naturally in the soil . There is, therefore, a gap in knowledge about the occurrence of AMF species in soil cultivated with wheat in the Cerrado. Only one study from Brazil evaluated changes in the diversity of AMF species in soil cultivated with various wheat genotypes . Wheat may be cultivated in conventional and no-tillage systems, and tillage, cover cropping, and crop succession can alter the indigenous AMF community structure and diversity in soil and roots , . In general, intensive changes in edaphic ecosystems reduce the abundance and diversity of arbuscular mycorrhizal fungi, and sustainable and conservationist management practices, such as no-tillage, have a positive effect on the diversity of arbuscular mycorrhizal fungi and other beneficial organisms in the rhizospheres of cultivated systems – . Soil management can promote changes in the quantity and predominance of mycorrhizal species in the soil. Management practices also promote different changes in species diversity, which are present in the native arbuscular mycorrhizal community and interfere with symbiotic efficiency , . In addition, genetic variability has been observed between wheat genotypes associated with arbuscular mycorrhizal fungi and wheat cultivars, which alters the species diversity of these fungi in the rhizosphere region . In management systems in which plants are colonized by AMF, minimal losses of nutrients occur. 
Mycorrhizal roots have greater longevity and absorb more water and nutrients; in addition, they are highly important in Cerrado soils, where fertility is low and the soils are acidic , , – . No-tillage is one of the main conservationist management systems used in Brazil, promoting improvements in soil quality associated with greater aggregate stability, increasing soil moisture and organic matter levels , , and increasing the diversity of arbuscular mycorrhizal fungal species in the soil . Under conventional tillage, by contrast, the soil is chisel-plowed to a depth of 20 cm, which promotes a decrease in arbuscular mycorrhizal fungi in the soil . Thus, the objective of this work was to evaluate mycorrhizal colonization, spore density, soil glomalin content and species diversity in five wheat genotypes under no-tillage and conventional tillage systems.
We conducted the experiment at Embrapa Cerrados, Planaltina, Federal District (15°35'30" S, 47°42'30" W, at an altitude of 1,007 m). The soil was classified as a typical Oxisol according to the FAO . According to the Köppen classification, the climate is seasonal tropical (Aw) , characterized by two well-defined seasons (dry and rainy), with periods of drought during the rainy season. The average annual precipitation ranges from 1,400 mm to 1,600 mm in this region, and the average annual air temperature varies between 15 °C and 27 °C. The area was cultivated with Brachiaria ( Urochloa ruziziensis ) for ten years, and from 2005, there was crop rotation with wheat in the winter (May–October), soybean in the summer (October–January), and common bean in the off-season (January–April). Before wheat planting, Crotalaria ( Crotalaria juncea ) was planted in January 2011 and harvested in March 2011. In half of the experimental area, the straw was incorporated with a plow harrow into the 0–20 cm soil layer, and in the other half, crop residues remained on the soil surface without being incorporated into the soil, in a no-tillage system. The results of the soil analysis performed before planting are presented in Table . The experiment followed a randomized block design with three replications in a split-plot scheme. The plots comprised the two management systems, no-tillage and conventional tillage, and the subplots comprised five wheat genotypes: 'Aliança', 'Brilhante', 'BRS 264', 'PF020037', and 'PF020062'. 'Brilhante' is drought tolerant and was the only studied genotype without awns. Regarding the genotypes' backgrounds: 'Brilhante' is a drought-tolerant material for rainfed cultivation; 'BRS404' is a wheat cultivar launched in 2015 that is suitable for rainfed cultivation in Central Brazil; 'PF 080492' is classified as a material for rainfed cultivation in southern Brazil but has demonstrated suitability for rainfed conditions in the Midwest Region; and 'PF020037' is a line developed for rainfed cultivation that shows intense waxing on leaves and stalks, a natural mechanism of drought tolerance . The seeds used were obtained from Embrapa Cerrados itself. The experiment was carried out during the winter season (May–October) under irrigation, as there is almost no rain in the Cerrado region during this period . In May 2011, we planted the five wheat genotypes mechanically in five rows per plot for each genotype, with a spacing of 20 cm between the rows. Seeds were planted at a depth of 3 cm and a density of 300 seeds/m². At the tillering stage, 400 kg ha⁻¹ of N-P₂O₅-K₂O (4–14–8) fertilizer and 100 kg ha⁻¹ of urea were applied to the wheat genotypes. The area of each experimental unit was 4 m². Irrigation depths of 150 mm were applied until tillering, and a total of 300 mm was applied until the date of root and soil collection at flowering of the wheat genotypes. During the experiment, the total natural precipitation was 5.8 mm, the maximum temperature was 34 °C, and the minimum temperature was 8.3 °C. The relative air humidity was 54%, according to data from the Agroclimatological Station of Embrapa Cerrados. During flowering of the wheat genotypes, forty-five days after seedling germination, soil and roots from each plot were randomly collected. In each experimental unit, roots from five plants (5 g per plot) and five subsamples of 50 g of soil around the roots were collected at a depth of 0–20 cm. The composite soil samples were homogenized and stored under refrigeration at 10 °C.
The roots were subsequently washed and preserved in 50% alcohol. For the AMF colonization rate, we used the method of Phillips and Hayman to clarify the roots with 0.1 mol/L KOH in a water bath, and the AMF structures were stained with trypan blue. The AMF colonization rate was calculated via the gridline intersect method under a stereoscopic microscope, according to Giovannetti and Mosse . Mycorrhizal soil spores were extracted via the wet sieving method , with some adaptations. The soil sample was shaken in a blender with tap water for 30 s, and after soil decantation, the suspension was transferred to sieves with 1000 and 45 μm mesh. The soil particles retained on the 45 μm sieve were placed in centrifuge tubes and centrifuged at 3000 rpm for three minutes. The supernatant was discarded, and another centrifugation was performed at 2000 rpm for three minutes with 50% sucrose solution. The supernatant was placed in a channelled plate, and spores were counted under a stereoscopic microscope. Arbuscular mycorrhizal species were identified and separated into morphotypes on the basis of spore morphology, such as wall structure, spore size, and color. The spores of AMF were mounted on slides with pure polyvinyl alcohol–lactoglycerol (PVLG) and PVLG mixed with Melzer's reagent (1:1 v/v). The identification of mycorrhizal fungal species was carried out at the Mycorrhizal Fungi Laboratory of Embrapa Agrobiologia with the aid of an optical microscope, based on spore morphology. To assist in the identification, we used the original studies describing the species as well as the descriptions available on the website of the "International Collection of Arbuscular and Vesicular-Arbuscular Mycorrhizal Fungus Cultures" . The easily extractable glomalin-related soil protein (EE-GRSP) was extracted according to Wright and Upadhyaya and quantified via the Bradford assay with absorbance read at 595 nm. The spore number, mycorrhizal colonization and EE-GRSP data were subjected to analysis of variance, and means were compared via Tukey's test using the Assistat program . Principal component analyses were performed with the PAST program . The experimental research and collection of plant material complied with all institutional, national and international guidelines and legislation for conducting scientific, ethical and biosafety research.
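To illustrate the means comparison described above outside of Assistat, the following sketch performs a Tukey HSD test in Python. The spore counts and block structure are hypothetical placeholders, and a complete analysis would additionally respect the split-plot error structure (tillage as main plot, genotype as subplot).

```python
# Hedged sketch of the Tukey means comparison described above, transcribed
# from the Assistat workflow into Python. Values are illustrative
# placeholders, not the measured spore counts.
import pandas as pd
from statsmodels.stats.multicomp import pairwise_tukeyhsd

data = pd.DataFrame({
    "genotype": ["Alianca", "Brilhante", "BRS264", "PF020037", "PF020062"] * 3,
    "spores":   [150, 238, 160, 170, 247,   # hypothetical block 1
                 145, 230, 155, 165, 240,   # hypothetical block 2
                 155, 245, 162, 172, 250],  # hypothetical block 3
})

# Pairwise comparison of genotype means at alpha = 0.05 (Tukey HSD)
result = pairwise_tukeyhsd(endog=data["spores"], groups=data["genotype"], alpha=0.05)
print(result.summary())
```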
In all the studied genotypes, mycorrhizal colonization was greater in the no-tillage system than in the conventional tillage system (Table ), indicating that the wheat genotype affected the susceptibility of AMF root colonization to changes in the tillage system. No-tillage offered better conditions for mycorrhizal fungal development in the first year after implantation. In the soil under no-tillage, 'Aliança', 'Brilhante', and 'BRS 264' presented statistically similar mycorrhizal colonization values of 61, 59, and 61%, respectively. The 'PF020062' and 'PF020037' genotypes presented less mycorrhizal colonization (approximately 54%). Under conventional tillage, the 'PF020062', 'BRS 264', and 'PF020037' genotypes presented greater mycorrhizal colonization (42, 44, and 45%, respectively). As with mycorrhizal colonization (Table ), the number of AMF spores in the five cultivated wheat genotypes was greater in the no-tillage system than in the conventional system, generally twice as high (Table ). In the no-tillage system, 'PF020062' and 'Brilhante' had greater numbers of spores than the other treatments (247 and 238 AMF spores per 50 g of soil, respectively) (Table ). Under conventional tillage, in addition to these genotypes, 'PF020037' also presented a greater number of spores. For the easily extractable glomalin-related soil protein (EE-GRSP), there was no significant difference between the studied wheat genotypes. However, greater amounts of EE-GRSP were obtained in the no-tillage system than in the conventional tillage system (Table ). Among the AMF species found in the soil under the different wheat genotypes and management systems, the most common were Acaulospora scrobiculata , Sieverdingia tortuosa , Glomus macrocarpum and Cetraspora pellucida (Table ). In terms of species diversity, the composition of the AMF community differed among the wheat genotypes and agricultural management systems, with 15 AMF species found. Species richness under no-tillage was similar to that under conventional tillage, each system harbouring 12 of the 15 species found in the experimental area. The most frequent species was Acaulospora scrobiculata , which was present in 9 of the 10 treatments (management system × wheat genotype combinations), while Sieverdingia tortuosa and Glomus macrocarpum were found in all the genotypes in both management systems. Most species of the genus Acaulospora were found in both management systems, except for Acaulospora tuberculata , which was not detected in the no-tillage system. Acaulospora laevis occurred in both management systems in association with 'BRS 264' and 'PF020037', and only under no-tillage in 'PF020062'. A. scrobiculata occurred in all the genotypes and management systems except 'PF020062' under conventional tillage. Acaulospora denticulata was found only in 'PF020062' under conventional tillage and in 'Brilhante' and 'PF020037' under no-tillage. Acaulospora foveata occurred only under no-tillage and A. tuberculata only under conventional tillage, in both cases exclusively in the rhizosphere of the 'PF020062' genotype. In the principal component analysis of the frequency of AMF species across the no-tillage and conventional tillage systems and the 'Aliança', 'Brilhante', 'BRS 264', 'PF020037', and 'PF020062' genotypes, the species A. denticulata , A. tuberculata , A. foveata , A. laevis , Am. leptoticha , R. microaggregatum , C. lamellosum , C. gregaria , and R.
persica did not correlate with soil management or wheat genotype (Fig ). The species C. pellucida , S. clavispora , S. tortuosa , G. macrocarpum , A. scrobiculata , and Gigaspora sp. were more closely related to the studied genotypes under the conventional and no-tillage systems. Gigaspora sp. was found in the soils associated with 'Aliança', 'Brilhante', and 'PF020062' under conventional tillage and, under no-tillage, in 'PF020037' and 'PF020062'. Racocetra gregaria occurred only in 'Aliança' under no-tillage. Racocetra persica occurred under conventional tillage only in 'Brilhante' and under no-tillage in 'Brilhante', 'BRS 264', and 'PF020062'. Cetraspora pellucida occurred under both no-tillage and conventional tillage in all genotypes except 'Brilhante', in whose soil this species was not found. Principal component analysis revealed the correlation of the wheat genotypes analyzed with the presence of arbuscular mycorrhizal fungal species. The frequency of AMF species present in the rhizosphere of the genotypes 'Aliança', 'Brilhante', 'BRS 264', 'PF020037' and 'PF020062' demonstrated an association between the species A. scrobiculata , A. tuberculata , G. macrocarpum , C. pellucida , and Gigaspora sp. and the genotypes studied. A. denticulata occurred in soil under conventional tillage only in association with 'PF020062', and G. clavispora was more closely related to the 'Aliança' and 'Brilhante' genotypes (Fig ). Table presents the Shannon and Simpson diversity index data. Diversity index values were similar between the conventional tillage and no-tillage systems when the diversity and richness of the species identified in the wheat genotypes were evaluated.
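For reference, the two diversity indices reported in Table are conventionally defined as

$$H' = -\sum_{i=1}^{S} p_i \ln p_i \quad \text{(Shannon)}, \qquad D = 1 - \sum_{i=1}^{S} p_i^{2} \quad \text{(Simpson)},$$

where $S$ is the number of AMF species detected in a treatment and $p_i$ is the proportion of spores belonging to species $i$. The text does not specify which variant of the Simpson index was used; the Gini–Simpson form $1 - \sum p_i^{2}$ and the inverse form $1/\sum p_i^{2}$ are both common, so the form shown here is an assumption.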
AMF occupy an important ecological niche in ecosystems and are influenced by soil management practices such as plowing and fertilization, which may reduce the incidence of some AMF species , . According to Higo et al. , the development of mycorrhizal fungal species and their root colonization occur in the 0–20 cm layer, and changes in the soil due to management practices negatively influence the quantity and diversity of AMF species. Conventional soil preparation (plowing and harrowing), through soil mobilization and hyphal rupture, reduces the AMF inoculum and exposes hyphae, spores and colonized roots to adverse conditions, such as high temperatures and predatory organisms , . On the other hand, the no-tillage system promoted an increase in the organic matter content of the soil, greater root colonization and an increase in the number of AMF spores in the soil. Furthermore, no-tillage systems are related to the formation and stability of aggregates in the soil, which are associated with the production of glomalin by AMF – , – , – . In this study, mycorrhizal colonization in all the wheat genotypes was sensitive to the impact of soil disturbance caused by conventional tillage – . In general, management actions that impose some type of stress or discomfort on cultivated plants can stimulate mycorrhizal activity in the roots . However, the 'Aliança' and 'Brilhante' genotypes were more affected than the other wheat genotypes, with a decrease in the number of spores in the rhizosphere region. It is also possible that root development and architecture differ between wheat genotypes and that AMF root colonization may increase plant resistance to toxic levels of Al , especially in Oxisols. Furthermore, wheat genotypes may differ in their capacity for root development under conventional tillage, leading to a reduction in AMF colonization. Cerrado soils are characterized by low fertility, low cation-exchange capacity (CEC), low levels of organic matter and high levels of aluminum , . These characteristics impose a stressful condition on the plant that contributes to increased mycorrhizal activity in the soil and roots. Conservationist cultivation systems such as no-tillage improve environmental conditions for plant growth and development, which is ultimately reflected in the activity of fungi in the soil , , . Different genotypes of the same plant species, such as the wheat in this study, may differ in their root exudation, which may affect interactions between plants and microorganisms , , promote changes in species diversity in the rhizosphere, and interfere with the association with mycorrhizal fungal species. Azcon and Ocampo analyzed the colonization rate of thirteen wheat genotypes inoculated with Funneliformis mosseae (formerly classified as Glomus mosseae ); colonization rates ranged from zero for 'Negrillo' and 'Champlein' to 38% for 'Lozano', and the authors reported that root colonization was affected by the production of exudates in the rhizosphere. This may explain the differences in AMF root colonization between cultivars within the same tillage system. Under different soil management systems, Hildermann et al. studied ten wheat genotypes; mycorrhizal colonization was approximately 27%, with no significant difference between the genotypes studied.
This work revealed that mycorrhizal colonization in wheat genotypes under the two soil management systems varied between 40% and 61%, almost double the value reported by the authors cited above. Furthermore, root colonization can change over the crop cycle, as reported by Taibi et al. (2021) , indicating that root colonization by AMF is dynamic throughout the crop cycle. Cover crops can also positively influence the number of spores in the wheat rhizosphere – . Brito et al. evaluated the influence of cover crops on spore density in wheat over two years, and spore density in the treatments without soil mobilization was greater than in the treatments with soil mobilization. As the no-till system depends on the cultivation of cover crops before the main crop is planted, this dynamic may be responsible for the greater diversity of AMF in this management system , , – . As AMF are organisms with cosmopolitan ecological behavior and can associate with more than one plant species, systems that promote a greater diversity of plants throughout the year will contribute to better maintenance of AMF species richness in soils , , . Thus, both management systems more favourable for the development of AMF and wheat genotypes with greater compatibility with AMF may be selected in the future. Easily extractable glomalin-related soil protein (EE-GRSP) is a hydrophobic, recalcitrant, and thermostable glycoprotein that can be produced by soil organisms, particularly arbuscular mycorrhizal fungi, and is associated with stable macro- and microaggregates in soil – . The bonding property of EE-GRSP favors the formation of stable aggregates – . Furthermore, this protein adsorbs heavy metals, reducing the availability and risk of toxicity of these elements for organisms and plants in polluted soils , . The EE-GRSP values varied between crops and soil management practices. Truber and Fernandes reported similar EE-GRSP values, between 1.12 and 1.24 mg g⁻¹ of soil, for several crops grown after sugarcane. In areas with different crops and in undisturbed areas, values between 6.51 and 10.56 mg g⁻¹ were obtained . Furthermore, Wilkes et al. (2021) reported higher EE-GRSP values under no-tillage than under conventional tillage, and a strong correlation was detected between EE-GRSP and water-stable aggregates – . Although it is not possible to state that the glomalin identified in the samples was in fact produced by the action of AMF, there is a correlation between the activity of these fungi and the levels of glomalin extracted , . The presence of cover crops is an indicator of the availability of carbohydrates for AMF, an aspect that, according to Rillig et al. , possibly promoted higher concentrations of glomalin under grasses and shrubs than in uncovered areas in the Mediterranean . Similarly, in a semiarid ecosystem in North America, Bird reported similar results, in which areas with cover crops presented relatively high concentrations of EE-GRSP. According to these authors, soils under plant canopies accumulate more organic matter and are less exposed to disturbances, which promotes better conditions for fungal growth and EE-GRSP production. In Cerrado soils, the level of EE-GRSP in a crop–livestock integration system was 2.3 mg/kg . Soil management and the different wheat cultivars affected the presence of mycorrhizal fungal species associated with the wheat cultivars (Table ). According to Mao et al.
, wheat cultivars can regulate the colonization and community of mycorrhizal fungal species in the rhizosphere region. The identification of AMF species through spore morphology has been an important tool for detecting changes in species diversity . On the other hand, other authors have detected a greater number of species using molecular tools to identify AMF species, as reported by Jansa et al. . In the present work, management systems and wheat genotypes altered AMF species diversity, and this diversity appears to be closely associated with the wheat genotype. The use of several wheat genotypes promoted changes in the diversity of AMF in the rhizosphere, as described by Méndez et al. , who reported twelve AMF species associated with wheat genotypes in the Cerrado region; in their study, the species Acaulospora scrobiculata was associated with all the wheat genotypes ('Brilhante', 'PF020037', 'BRS404' and 'PF080492'). These results are similar to those of our study, as this species was associated with all wheat genotypes in both management systems, with the exception of 'PF020062'. These results indicate that this species is promising for selection in further studies with wheat in the Cerrado region. Higo et al. carried out a study with three wheat genotypes in which five species of the genus Glomus , three species of the genus Gigaspora , two species of the genus Acaulospora , and one species each of the genera Funneliformis , Rhizoglomus (formerly classified as Rhizophagus ), Racocetra , Claroideoglomus , Diversispora and Sclerocystis were identified . These authors also reported that soil management with cover crops positively influences the AMF population and that more conservationist systems, such as no-tillage, promote greater mycorrhizal colonization, a greater number of spores and greater species diversity – . Angelini et al. reported 19 species within the genera Acaulospora , Archaeospora , Glomus and Scutellospora in conventional and no-tillage systems cultivated with corn ; the total number of species found in the no-tillage system was greater than that found in the soil under the conventional system. However, Gai et al. studied the composition of the mycorrhizal community in soils under no-tillage and conventional tillage systems and reported no significant differences in the frequency of AMF species.
Compared with no-tillage, all the genotypes under conventional tillage presented decreases in mycorrhizal colonization, spore number in the rhizosphere, and easily extractable glomalin-related soil protein. The AMF community composition differed among the wheat genotypes and management systems. No-tillage had a species richness similar to that of conventional tillage, both with twelve species. The most frequent species were A. scrobiculata , S. tortuosa , and G. macrocarpum , which were found in all the genotypes in both cultivation systems. The results indicate that wheat varieties can be selected for Cerrado soils that make optimum use of arbuscular mycorrhizal fungi for nutrient uptake and yield formation. To confirm this, field trials with different wheat varieties should be conducted to investigate the relationships among AMF root colonization, AMF species distribution and yield.
Experimental research and field studies on cultivated plants, including the collection of plant material, comply with the required institutional, national and international guidelines and legislation. The seeds used in the experiment are free to use, and their varieties are described in the methodology section. The experimental area used was the university where the study was carried out.
Mitotic WNT signalling orchestrates neurogenesis in the developing neocortex | 580154df-1d72-4268-85ca-35495086acf0 | 8488556 | Anatomy[mh] | During embryonic/foetal development, the mammalian neocortex undergoes a large increase in surface area and a drastic expansion of the number of projection neurons necessary for higher cognitive functions (Caviness et al , ; Rakic, ; Rakic, ; Lui et al , ; Florio & Huttner, ; Sun & Hevner, ). Crucial to this increase are the neural progenitor cells (NPCs) in the developing neocortex. The primary population of NPCs reside in the ventricular zone (VZ), adjacent to the ventricular lumen, and during the initial stages of neocortex development undergo several rounds of self‐amplifying symmetric divisions (Malatesta et al , ; Noctor et al , ; Götz & Huttner, ), commonly referred to as increased self‐renewal (see note on terminology in Methods). With the onset of cortical neurogenesis, these NPCs, known as “apical progenitors” (APs), begin to divide asymmetrically to generate either post‐mitotic neurons (direct neurogenesis, a minor pathway in mammals) or a secondary population of NPCs termed “basal progenitors” (BPs, indirect neurogenesis, the major pathway in mammals) (Kriegstein & Götz, ; Götz & Huttner, ; Taverna et al , ). In contrast to APs that undergo mitosis at the apical surface, BPs undergo mitosis basal to the VZ in a second germinal zone known as the subventricular zone (SVZ). Depending on the mammalian species, BPs either divide to give rise to two neurons (also referred to as neuronal differentiation of BPs), which is typically the case in embryonic mouse neocortex, or first self‐renew to increase their pool size and then generate neurons, which is the case for foetal human neocortex (Lui et al , ; Florio & Huttner, ; Taverna et al , ; Molnár et al , ). Maintaining the balance between symmetric self‐renewing AP divisions and asymmetric BP‐genic AP divisions is one critical basis for generating the correct number of neurons in the neocortex (Huttner & Kosodo, ; Delaunay et al , ). In this context, cell signalling pathways are crucial to regulate neurogenesis, but pinpointing their precise functions (e.g. in NPC self‐renewal vs. generation of differentiated cells, in which cell types, at which developmental stages) has proven challenging, and, at times, controversial (Taverna et al , ). A case in point is WNT signalling, a conserved pathway that is intricately linked with neocortex development. The overall role of WNT signalling in neurogenesis is complex and highly dependent upon the models used, as well as the epistatic level at which the pathway is manipulated (Harrison‐Uy & Pleasure, ). For instance, genetic ablation of β‐catenin , the signal transducer driving the transcriptional response of canonical WNT signalling, leads to increased cell cycle exit of NPCs and premature neuronal differentiation (Machon et al , ; Woodhead et al , ; Mutch et al , ). Conversely, overexpression of constitutively active β‐catenin or deletion of glycogen synthase kinase 3 (GSK3), a key negative regulator of canonical WNT signalling, drive increased AP self‐renewal at the expense of BP and post‐mitotic neuron generation (Chenn & Walsh, ; Machon et al , ; Wrobel et al , ; Kim et al , ). 
Hence, it is generally thought that the primary role of canonical WNT signalling is to promote NPC self‐renewal, a phenomenon that is also observed in other parts of the developing nervous system and is mediated by the transcriptional activity of β‐catenin (Zechner et al , ; Gulacsi & Anderson, ; Draganova et al , ). However, in vitro studies have demonstrated that WNT/β‐catenin, through transcriptional regulation of N‐myc and the neurogenic transcription factors Ngn1/2, can also promote differentiation of NPCs (Hirabayashi et al , ; Israsena et al , ; Kuwahara et al , ). Wnt7a and Wnt7b promote NPC proliferation (Viti et al , ; Qu et al , ). In contrast, mice mutant for LRP6 (low‐density lipoprotein receptor‐related protein 6), the principle co‐receptor for the canonical pathway, exhibit normal NPC proliferation but decreased neuronal differentiation (Zhou et al , ). Further complicating issues, expression of Wnt3a in the neocortex of mouse embryos by in utero electroporation leads to both AP self‐renewal and neuronal differentiation of BPs (Munji et al , ). Taken together, the precise role of WNT signalling in mouse neocortex development remains unclear. Does canonical WNT signalling promote self‐renewal or differentiation of NPCs? One solution to the observed discrepancies could be that WNT signalling regulates both progenitor self‐renewal and differentiation but in different populations of NPCs, namely self‐renewal in APs and differentiation in BPs (Munji et al , ). However, another possible explanation for the apparent inconsistencies may be that manipulation of WNT signalling components can exert different effects depending on whether the manipulated WNT effectors function upstream or downstream in the pathway. This is because WNT/LRP6 signalling triggers various sub‐pathways that are differentially affected by manipulation at the ligand, receptor, or intracellular levels (Acebron & Niehrs, ; García de Herreros & Duñach, ). The main function of LRP6‐dependent WNT signalling is to inhibit GSK3, and it is still widely assumed that the only relevant GSK3 substrate in the WNT pathway is β‐catenin (Nusse & Clevers, ). However, in addition to β‐catenin, GSK3 can phosphorylate many other proteins and target them for proteasomal degradation (Taelman et al , ; Acebron et al , ). In this context, a novel player is the WNT/STOP (WNT‐ st abilization o f p roteins) pathway (see Fig for a model), which acts post‐transcriptionally, is independent of β‐catenin, peaks during mitosis, and slows down degradation of numerous proteins as cells prepare to divide (Acebron et al , ). Key effectors of WNT/STOP signalling are cyclin Y (Ccny) and cyclin Y‐like 1 (Ccnyl1), conserved cyclins that together with their cyclin‐dependent kinases (CDK) 14 and 16 phosphorylate and activate the WNT co‐receptor LRP6 in G2/M. This co‐receptor activation leads to a peak of WNT signalling and GSK3 inhibition in mitosis (Davidson et al , ). In vivo, the WNT/STOP pathway plays a role in germ cells (Huang et al , ; Koch et al , ) and cancer cells (Madan et al , ; Hinze et al , ). Here, we examine the in vivo role of WNT/STOP signalling in NPCs during multiple phases of mouse neocortex development. Genetic ablation of Ccny and Ccnyl1 leads to a thinner cerebral cortex and a reduced number of BPs and post‐mitotic neurons. Importantly, Ccny / l1 ‐deficient mice display decreased WNT signalling at the receptor level, but not at the transcriptional level. 
Through a series of in vivo and in vitro analyses, we show that WNT/STOP signalling is essential for asymmetric AP division, cell cycle progression of BPs and neuron generation. Mechanistically, Ccny / l1 stimulate neuronal differentiation of BPs through post‐transcriptional regulation of Sox4 and Sox11, two essential neurogenic transcription factors (Bergsland et al , ; Chen et al , ) that we identify as direct GSK3 targets. We therefore propose that WNT/STOP signalling is the primary driver of neuronal differentiation of NPCs whereas WNT/β‐catenin signalling predominantly regulates NPC self‐renewal.
WNT/STOP signalling is required for neurogenesis in the embryonic mouse neocortex

To study the role of WNT/STOP signalling in the developing mouse neocortex, we generated embryos deficient for Ccny and Ccnyl1 (hereafter referred to as double knockout (DKO) embryos). In contrast to individual Ccny or Ccnyl1 ‐deficient embryos, which are both viable (An et al , ; Koch et al , ), DKO embryos displayed in utero death beginning at embryonic day 14.5 (E14.5). To avoid the possibility that the results of our analyses of DKO embryos might reflect non‐specific effects resulting from early lethality, we analysed DKO and littermate control embryos at E13.5. Analysis of haematoxylin–eosin (HE)‐stained DKO forebrains revealed a significantly thinner neocortical wall (−32%, P = 0.0004) compared to littermate controls. Mediolateral neocortex length was not significantly changed (Fig ; quantified in Appendix Fig ). To further dissect the reduction in the thickness of the neocortical wall observed in E13.5 DKO embryos, we measured VZ, SVZ and intermediate zone (IZ) plus cortical plate (CP) thickness upon DNA staining and immunofluorescence microscopy (IF) for T‐box brain protein 2 (Tbr2), a BP marker that permitted visualization of the SVZ. This revealed that within the thinner neocortical wall of DKO embryos, the proportion of neocortical wall thickness constituted by the VZ was increased by 12%, whereas that constituted by the SVZ and IZ+CP was decreased by 20% and 18%, respectively, when compared to the respective proportions in the thicker neocortical wall of control embryos (Fig ; Appendix Fig ). To corroborate these observations, we quantified the number of APs, BPs and post‐mitotic deep‐layer neurons upon IF for their respective markers Paired box protein 6 (Pax6), Tbr2, and T‐box brain protein 1 (Tbr1). The percentage of Pax6 + cells in the thinner neocortex of E13.5 DKO embryos was slightly increased (+11%, P = 0.04), whereas the percentages of Tbr2 + and Tbr1 + cells were decreased (−38%, P = 0.0006; −25%, P = 0.01; respectively) when compared to the thicker neocortex of control embryos (Fig ). Consistent with this, IF for βIII‐tubulin (Tuj1), which marks newborn neurons, revealed that in the thinner neocortical wall of E13.5 DKO embryos, the layers containing newborn neurons comprised a smaller proportion of the neocortical wall thickness than in the thicker neocortical wall of control embryos (control 50.9 ± 5.5 vs. DKO 33.2 ± 1.0 (% of total cortex thickness), P = 0.048; Fig ). Together, these data indicated that the reduction in neocortical wall thickness in E13.5 DKO embryos was primarily due to a decrease in the levels of BPs and newborn neurons, which in turn suggested that cortical neurogenesis is reduced in DKO embryos. Decreased BP and, consequently, post‐mitotic neuron levels can be due, at least in part, to structural defects in neocortical cytoarchitecture, such as improper organization of the radial glial scaffold and disruption of apical‐basal polarity, which may impede migration of BPs to the SVZ and of newborn neurons to the CP (Taverna et al , ). However, IF against the radial glia‐specific intermediate filament marker nestin revealed no overt abnormalities in the radial glial scaffold of E13.5 DKO neocortex when compared to control (Fig ). Also, we observed normal enrichment of β‐catenin at the apical cell cortex of DKO forebrains, suggesting that apical–basal polarity was not affected in the absence of Ccny/l1 (Fig ).
We conclude that DKO embryos display neurogenesis defects but no major structural abnormalities in the embryonic neocortex. We performed IF for Ccny and Ccnyl1 in the E12.5‐13.5 mouse neocortex. Interestingly, both Ccny and Ccnyl1 immunoreactivity was concentrated at the apical cell cortex/apical plasma membrane of the VZ of control embryos (Fig ). No Ccny or Ccnyl1 immunoreactivity was detected in the neocortex of DKO embryos, confirming that the two polyclonal Ccny and Ccnyl1 antibodies used are specific (Fig EV1E and F). Ccny and Ccnyl1 immunoreactivity was also detected in, respectively, 28 ± 0.63% and 24 ± 0.68% of Tbr2 + BPs in the SVZ (Fig ). Ccny/l1 were not detected in post‐mitotic neurons. To further analyse the distribution of Ccny/l1 protein in APs and BPs, we performed in utero electroporation (IUE) in E13.5 embryos with a plasmid coding for green fluorescent protein (GFP) and analysed embryos at E15.5. Triple IF for Ccny/l1, the AP marker Sox2 (SRY (sex‐determining region Y)‐box 2), and GFP confirmed Ccny/l1 immunoreactivity at the apical membrane of electroporated APs (Fig ), while triple IF for Ccny/l1, Tbr2 and GFP revealed Ccny/l1 immunoreactivity as single puncta in BPs (Fig ). In light of these observations, we next performed IF with an LRP6 antibody specific for the casein kinase 1 gamma (CK1γ) phosphorylation site T1479, which marks active WNT signalling (Davidson et al , ). Similar to the Ccny and Ccnyl1 immunoreactivity, phospho‐T1479 LRP6 immunoreactivity was also found to be concentrated at the apical cell cortex / apical plasma membrane of the VZ of E13.5 control mouse neocortex (Fig ). This concentration reflected the specific enrichment of phospho‐LRP6 immunoreactivity in 91% of mitotic APs analysed (Fig , arrowheads, inset). The apical concentration of phospho‐LRP6 immunoreactivity in mitotic APs is consistent with the fact that mitosis of APs typically occurs at the ventricular surface of the neocortex (Taverna et al , ). Phospho‐T1479 LRP6 immunoreactivity was also observed in 48% of mitotic BPs of E12.5 control mouse neocortex (Fig , inset). To confirm this pattern of immunostaining, we performed IF on control embryonic mouse neocortex with another phospho‐LRP6 antibody that detects the CDK14 priming phosphorylation site S1490 (Davidson et al , ). Again, phospho‐S1490 LRP6 immunoreactivity was enriched in E13.5 mitotic APs (90%) and BPs (88%), with the mitotic stage of these progenitors being confirmed by co‐immunostaining with the mitotic marker phospho‐histone H3 (pHH3) (Fig , insets and arrowhead). Mitotic APs also showed Ccny (30%) and Ccnyl1 (29%) immunoreactivity (Fig ). IF for total LRP6, CDK14 and GSK3β revealed the greatest concentration of immunoreactivity at the apical cell cortex / apical plasma membrane of the VZ of E13.5 control mouse neocortex (Fig ). Altogether, these data suggest that the core components of the WNT/STOP signalling pathway are expressed in the embryonic mouse neocortex and that WNT/LRP6 signalling peaks during mitosis in APs and BPs. To determine whether WNT signalling in DKO forebrains was affected at the receptor level, we quantified the immunostaining intensity for active phospho‐LRP6 (T1479) within mitotic APs and BPs of the E13.5 neocortex and detected a decrease (−15%, P = 0.01) when compared to controls (Fig ). Phosphorylation of LRP6 at S1490 was even more markedly reduced (−45%, P = 0.001) when analysed by immunoblotting of protein lysates extracted from E13.5 DKO and control forebrains (Fig ).
To monitor canonical WNT signalling, we performed RNAScope analysis on sections of E13.5 neocortex using a probe against the β‐catenin target gene Axin2 . Axin2 expression was not significantly changed in the neocortex of DKO embryos when compared to controls (Fig ). To confirm this result, we extracted RNA from E13.5 dorsal forebrains and performed qPCR analysis. Expression of Axin2 and N‐myc , another WNT target gene in the neocortex (Kuwahara et al, ), was not significantly altered in DKO dorsal forebrains ( Axin2 , P = 0.95; N‐myc, P = 0.41) (Fig ). Furthermore, immunoblot analysis of whole forebrain lysates probed with an antibody against dephosphorylated β‐catenin, which represents its active form, showed no significant change in E13.5 DKO embryos (Fig ). We conclude that combined Ccny / l1 deficiency in the embryonic mouse neocortex leads to decreased LRP6 receptor activation without changes in β‐catenin activity. Together with the reduction in neuron levels in the DKO neocortex, these data are consistent with WNT/STOP signalling being required for neurogenesis in the embryonic mouse neocortex.

DKO embryos show delayed cell cycle progression and increased mitosis length in BPs

In light of the reduced levels of BPs and neurons in the neocortex of DKO embryos, we analysed cell cycle parameters of APs and BPs in control and DKO embryos, as alterations in cell cycle progression have been shown to affect NPC fate and cortical neurogenesis (Götz & Huttner, ; Dehay & Kennedy, ; Arai et al , ; Borrell & Calegari, ). To label NPCs in S‐phase in the neocortex of control and DKO embryos, we injected the thymidine analog bromo‐deoxyuridine (BrdU) at E11.5, E12.5 and E13.5 and sacrificed mice 1 h later. Co‐IF for Pax6, Tbr2 and BrdU did not reveal any major difference between control and DKO neocortex in the proportion of APs (Pax6 + Tbr2 − ) that were in S‐phase (i.e. BrdU + ) at E11.5 and E12.5, although a slight increase was detected for DKO neocortex at E13.5 (Fig ). The average percentage of E13.5 BrdU + APs, i.e. APs in S‐phase, was 31% for control and 39% for DKO neocortex (Fig ). The proportion of BPs (Tbr2 + ) that were in S‐phase (BrdU + ) was moderately decreased in DKO neocortex compared to control at all time points analysed (Fig ). The average percentage of E13.5 BrdU + BPs, i.e. BPs in S‐phase, was 24% for control and 21% for DKO neocortex (Fig ). We next determined the length of S‐phase of APs and BPs in E13.5 control and DKO neocortex. To this end, we performed timed injections of the thymidine analogs iodo‐deoxyuridine (IdU) and BrdU. Briefly, IdU was injected at T = 0 to label APs and BPs in S‐phase, and BrdU was injected at T = 1.5 h to identify those APs and BPs that were still in S‐phase at this time vs. those that had left S‐phase. Embryos were collected 30 min after BrdU injection (Fig , schematic). We then extrapolated from the percentage of APs (Tbr2 − ) and BPs (Tbr2 + ) of control and DKO neocortex that were IdU + but BrdU − , i.e. that had left S‐phase after 1.5 h (Fig , yellow; Appendix Fig ), and determined the time when all control and DKO APs and BPs would have left S‐phase, which yielded the length of S‐phase for control and DKO APs and BPs (Fig ). This revealed a small increase in S‐phase length of E13.5 DKO APs as compared to control APs, and a ≈ 50% increase in S‐phase length of E13.5 DKO BPs as compared to control BPs (Fig ).
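The extrapolation just described, together with the conversions applied in the next paragraph, can be written compactly. Assuming cells are uniformly distributed along S‐phase (a standard steady‐state assumption that the text does not state explicitly), if a fraction $f_{\mathrm{left}} = N_{\mathrm{IdU^{+}BrdU^{-}}}/N_{\mathrm{IdU^{+}}}$ of the IdU‐labelled NPCs has exited S‐phase after $\Delta t = 1.5$ h, then

$$T_S = \frac{\Delta t}{f_{\mathrm{left}}}, \qquad T_C = \frac{T_S}{p_S} \times 100, \qquad T_M = T_C \times \frac{p_M}{100},$$

where $T_S$, $T_C$ and $T_M$ are the lengths of S‐phase, the total cell cycle and mitosis, $p_S$ is the percentage of the respective NPC type in S‐phase, and $p_M$ is the percentage of cycling NPCs in mitosis.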
Knowing the percentage values of E13.5 control and DKO APs and BPs in S‐phase (Fig ), and the length of S‐phase of these types of NPCs (Fig ), allowed us to calculate the total length of the cell cycle of E13.5 control and DKO APs and BPs by dividing the S‐phase length values (Fig ) by the percentage values for these NPCs in S‐phase (Fig ) and then multiplying the resulting numbers by 100 (Fig ). This revealed no difference between E13.5 control and DKO APs, but nearly a doubling of total cell cycle length in DKO BPs as compared to control (Fig ). To calculate the length of mitosis, we first performed co‐IF for pHH3, Ki67 and Tbr2 (Fig ) to determine the percentage of cycling (Ki67 + ) E13.5 control and DKO APs (Tbr2 − ) and BPs (Tbr2 + ) that were in mitosis (pHH3 + ) (Fig ). This revealed no major differences between control and DKO APs, nor between control and DKO BPs (Fig ). We then calculated the length of mitosis of E13.5 control and DKO APs and BPs by multiplying the total cell cycle length values for each of these NPCs (Fig ) by the percentage values for the respective NPC in mitosis (Fig ) and dividing the resulting numbers by 100, which yielded the length of mitosis (Fig ). While no significant difference in mitosis length was found between control and DKO APs, mitosis length of DKO BPs was found to be more than doubled as compared to control (Fig ). Data consistent with these findings were obtained when the proportions of APs and BPs in mitosis, deduced from ventricular and abventricular pHH3 + cells, respectively, were compared between E11.5‐E13.5 control and DKO neocortex (Fig ). All cell cycle calculations are detailed in the . Mitotic delay can lead to cell cycle arrest and increased apoptosis (Chen et al , ; Pilaz et al , ). To compare apoptosis levels in DKO vs. control forebrains, we performed IF with an antibody against cleaved caspase 3 (Fig ). Except for a minor, albeit statistically non‐significant, increase at E11.5, apoptosis levels in the E12.5 and E13.5 neocortex of DKO embryos were unchanged when compared to controls (Fig ). This was confirmed by TUNEL staining, which at E13.5 was also not significantly altered in the neocortex of DKO embryos (Fig ). Thus, apoptosis is unlikely to explain the reduced thickness of the cortical plate in DKO embryos. We conclude that the reduction of cortical plate thickness in DKO embryos involves delayed cell cycle progression and increased mitosis length in BPs.

Lack of Ccny / l1 expression reduces asymmetric AP division and neurogenesis in the embryonic neocortex

The relative increase in the thickness of the VZ and the reduced BP levels in DKO embryos raised the possibility that their thinner neocortex was not only due to a delayed cell cycle progression of BPs. Instead, the results suggested that Ccny / l1 may affect the generation of BPs from APs, and consequently of post‐mitotic neurons. In the mammalian neocortex, the switch of APs from symmetric, proliferative divisions to asymmetric, BP‐genic divisions is often associated with changes in apical membrane distribution, whereby unequal inheritance of the apical plasma membrane by the daughter cells indicates an asymmetric mode of AP division (Kosodo et al , ; Delaunay et al , ). We quantified the number of symmetrically vs. asymmetrically dividing APs by IF using the "cadherin hole" method, which allows determination of the plane of division in mitotic APs (Fig ) (Kosodo et al , ).
Interestingly, E13.5 DKO neocortex displayed a 36% reduction in asymmetric AP division (from 47 to 30% of all AP divisions, Fig ), consistent with decreased generation of BPs in the absence of Ccny / l1 . Mitotic spindle orientation is critical for determining symmetric vs. asymmetric cell division of NPCs in the developing neocortex (Konno et al , ; Yingling et al , ; Lizarraga et al , ; Asami et al , ; LaMonica et al , ; Xie et al , ; Mora‐Bermúdez & Huttner, ), with a key role of astral microtubules (aMTs) (Mora‐Bermúdez et al , ). A specific subpopulation of aMTs, which reach the apical or basal cell cortex and are referred to as apical‐basal aMTs, are more abundant in symmetrically dividing APs than in asymmetrically dividing APs (Mora‐Bermúdez et al , ). These apical–basal aMTs promote a mitotic spindle orientation perpendicular to the apical–basal axis of APs and reduce the variability of mitotic spindle orientation, which in turn favours symmetric AP division (Mora‐Bermúdez et al , ). Perturbation specifically of these apical–basal aMTs increases asymmetric, BP‐genic AP divisions and neurogenesis (Mora‐Bermúdez et al , ). To examine aMT abundance, we performed IF for α‐tubulin, acquired Z‐stack images of dividing APs and quantified the number of apical–basal aMTs vs. the aMTs reaching the central cell cortex (central aMTs) (Fig ). Strikingly, E13.5 DKO APs exhibited an increase specifically in apical‐basal, but not central, aMTs (Fig ), providing a mechanistic explanation for the increase in symmetric AP division (Fig ). We conclude that in dividing APs, lack of Ccny / l1 expression results in an increase in the number of apical–basal aMTs, which reduces asymmetric, BP‐genic cell division and hence neurogenesis in the embryonic neocortex. DKO embryos display premature embryonic lethality, limiting our analysis of neurogenesis to early stages of development in which the cortical plate is less developed. To analyse the effect of Ccny / l1 knockdown in NPCs at later time points, we performed IUE of mouse embryos at E13.5 with plasmids coding for either a control shRNA ( Co ) or shRNAs against Ccny and Ccnyl1 (sh Ccny / l1 ), along with a plasmid coding for GFP. At E15.5, knockdown of Ccny / l1 greatly reduced the proportion of the GFP + progeny of the electroporated cells that were Tbr2 + , i.e. BPs (Fig ). Similarly, at E17.5, the proportion of the GFP + progeny of the electroporated cells that expressed the deep‐layer neuron marker Ctip2 was depleted (Fig ). Altogether, these data corroborate that lack of Ccny / l1 expression reduces neurogenesis in the embryonic mouse neocortex and show that this effect is consistent throughout embryonic development. The IUE results also exclude the possibility that the neurogenesis defect observed in DKO embryos is due to an overall delay in embryonic development, or results from indirect effects related to global deletion of Ccny / l1 . We next asked whether NPCs deficient for Ccny / l1 also display a reduced capacity to generate neurons in vitro . To this end, we isolated NPCs from E13.5 Ccny +/− ; Ccnyl1 −/− forebrains and transduced them with a lentivirus expressing shRNA against Ccny (hereafter referred to as mutant NPCs) to induce acute Ccny / l1 deficiency. NPCs were then cultured in differentiation media to promote neurogenesis.
IF for Tuj1 revealed a drastic decrease in the percentage of cells expressing this marker of newborn neurons derived from the mutant NPCs, as well as a significant reduction in neurite length (control 47.7 ± 1.4 µm vs. mutant 23.8 ± 1.8 µm, P = 0.0006; Fig ). The defect in neuron generation was confirmed by qPCR analysis, which demonstrated a major decrease of the neuronal markers Tuj1 and doublecortin in mutant NPC cultures (Fig ). The neural stem cell marker Sox2 showed a slight but not significant increase in mutant cells (Fig ). Levels of phospho‐LRP6 (S1490), marking active WNT receptor signalling, were drastically reduced, whereas Axin2 expression was not significantly altered in the mutant NPC cultures, confirming that the effects of Ccny / l1 during in vitro differentiation are independent of canonical WNT/β‐catenin signalling (Fig ). Furthermore, immunoblotting of total protein lysates from mutant NPCs showed overall strongly increased lysine‐48 ubiquitination (Fig ), indicating globally enhanced protein degradation, a hallmark of impaired WNT/STOP signalling (Taelman et al , ; Acebron et al , ). The results provide further support that Ccny/l1 and WNT/STOP signalling promote asymmetric AP division and neurogenesis in the embryonic neocortex. Sox4/11 are direct GSK3 targets WNT/STOP signalling acts by protecting proteins carrying GSK3 phosphodegrons from proteasomal degradation. To identify downstream targets of WNT/STOP signalling, we screened for effector proteins with a role in neurogenesis and neuronal differentiation that harbour potential GSK3 phosphorylation sites. Two candidates were the SRY ‐related high‐mobility‐group box proteins Sox4 and Sox11 , SoxC subclass transcription factors. Sox4 / 11 display pan neuronal activity and are indispensable for neurogenesis and neuronal differentiation (Bergsland et al , ). Sox4 is highly expressed by BPs, and Sox11 is expressed both by BPs and newly formed neurons (Chen et al , ). Conditional inactivation of Sox4 in the neuroepithelium leads to a strong reduction of BPs while Sox11 deletion leads to decreased post‐mitotic neurons (Chen et al , ). Combined phenotypes from loss of Sox4 / 11 thus resemble the neurogenesis defects observed in DKO embryos. Mouse Sox4 contains a putative GSK3 phosphorylation site consisting of three serines spaced by three amino acids (SxxSSxxSS(316)P) (Fig ), a common feature of GSK3 motifs (Beurel et al , ). Mouse Sox11 contains two putative GSK3 phosphorylation sites: (S(244)PxxS) and (S(289)PxxSxxxS) (Fig ). The Sox4/11 GSK3 sites are highly conserved in other mammals such as chimps and humans. Moreover, S315 and S316 of the Sox4 motif, and S244 and S289 of the Sox11 motifs, are phosphorylated according to PhosphoSitePlus (Hornbeck et al , ) (Fig ). Sox11 S244 and S289 phosphorylation was also reported elsewhere (Balta et al , ). Finally, in silico analysis by NetPhos 3.1 (Blom et al , ) predicted GSK3 to be a top candidate for phosphorylating S316 of the Sox4 motif and both S244 and S289 of the Sox11 motifs (Fig ). To determine whether Sox4/11 are indeed phosphorylated, we overexpressed N‐terminally Flag‐tagged Sox4 / 11 in HEK293T (293T) cells and treated protein lysates with λ‐phosphatase. Immunoblot analysis revealed a downshift of Sox4 from 70 to 65 kDa (Fig ), and a downshift of a higher molecular weight smear representing phosphorylated Sox11 (Fig ). Endogenous Sox4 was also downshifted upon λ‐phosphatase treatment (Fig ). 
By IF, overexpressed Sox4/11 mainly localized to the nucleus with only minor cytoplasmic staining, consistent with their role as transcription factors, validating our constructs (Fig ). Next, we tested the effects of mutating the potential GSK3 phospho‐motifs of Sox4/11. For Sox4, we generated a mutant in which only S316 was mutated to alanine and one in which all five serines were mutated (S316all) (Fig ). For Sox11, we generated a mutant in which S244 and S289 were mutated (S244S289) and one where all serines of both motifs were mutated (S244allS289all) (Fig ). Immunoblot analysis revealed a downshift in the Sox4 S316 mutant, with multiple bands migrating lower than the expected size of wild‐type Sox4. The S316all mutant revealed an even more drastic downshift, mimicking the effect of phosphatase treatment on wild‐type Sox4 (Fig ). For Sox11, a minor downward shift of phosphorylated Sox11 was detected in the S244S289 mutant, while a more obvious shift was detected in S244allS289all mutants, with concomitant increase in the major Sox11 band, corresponding to unphosphorylated Sox11 (Fig ). Next, we tested the effect of GSK3β overexpression on Flag‐Sox4/11 by co‐transfection in 293T cells. GSK3β overexpression led to a slight but significant decrease in total Sox4 protein levels, with a greater decrease observed in non‐phosphorylated Sox4 (65 kDa band) (Fig ). Importantly, this effect was reversed by 1‐h treatment with the GSK3 inhibitor 6‐bromoindirubin‐3'‐oxime (BIO) (Fig ). GSK3β overexpression and BIO treatment had no effect on the S316all mutant form of Sox4, confirming this site is a GSKβ phospho‐motif (Fig ). GSK3β overexpression also reduced Sox11 protein levels, especially in non‐phosphorylated Sox11, and this effect was reversed by 1 h BIO treatment (Fig ). GSK3β overexpression and BIO treatment had no effect on the S244allS289all mutant form of Sox11 (Fig ). To test whether Sox4/11 are directly phosphorylated by GSK3, we immunopurified wild‐type and mutant (S316all and S244allS289all) Sox4 and Sox11, treated the proteins with λ‐phosphatase and then performed in vitro kinase assays by incubating with recombinant GSK3β and gamma‐ 32 P‐labelled ATP. Wild‐type Sox4/11 were both highly phosphorylated by GSK3β, while only minimal phosphorylation of the corresponding mutants could be detected (Fig ). Because WNT/STOP signalling inhibits GSK3β to protect target proteins from proteasomal degradation (Taelman et al , ; Acebron et al , ), we tested whether Sox4/11 are ubiquitinated by co‐transfecting HA‐tagged ubiquitin with Flag‐Sox4 / 11 and then briefly treating with MG132 to block proteasomal degradation of ubiquitin‐conjugated polypeptides. Pull‐down of Sox proteins with FLAG followed by immunoblot against HA‐ubiquitin revealed a characteristic polyubiquitin‐smear in both Sox4 and Sox11 ‐transfected cells, indicating that both Sox proteins are ubiquitin‐conjugated (Fig ). We conclude that Sox4/11 are (i) directly phosphorylated by GSK3β and (ii) regulated by proteasomal degradation. Sox4/11 protein levels are decreased in DKO NPCs, predominantly during mitosis We next tested whether Sox4/11 are regulated by GSK3 in NPCs, focusing first on Sox4. We treated cultured NPCs with BIO for 24 h and analysed endogenous Sox4 protein levels by immunoblot analysis. BIO treatment led to an overall increase in Sox4, with an even greater increase in the lower, non‐phosphorylated band (Fig ). 
Importantly, mutant Ccny / l1 NPCs showed significantly reduced Sox4 protein levels (Fig ), even though Sox4 mRNA levels where slightly elevated (Fig ). We next analysed Sox4 protein levels in the neocortex of E13.5 embryos in vivo . By IF, Sox4 co‐localized with Ccny and pLRP6 (S1490) in the SVZ (Fig ), and was concentrated in mitotic BPs (Fig ). In APs, Sox4 was enriched at mitosis (Fig ). In DKO embryos, we could not detect an obvious decrease in Sox4 staining intensity in Tbr2 + cells (Fig ). Likewise, immunoblot analysis of DKO forebrain lysates showed no reduction in total Sox4 protein levels (Fig ), and qPCR analysis of RNA extracted from dorsal forebrains revealed no significant changes in Sox4 expression ( P = 0.87; Fig ). In contrast, focusing on mitotic (pHH3 + ) BPs by IF, Sox4 protein levels were greatly reduced in DKO neocortex (−42%, P = 0.0002; Fig ). To further investigate WNT/STOP regulation of Sox4 during mitosis, we treated NPC cultures with nocodazole to arrest cells in G2/M. Interestingly, nocodazole‐treated mutant NPCs displayed a greater reduction of Sox4 compared to non‐treated cells (−45%, P = 0.02 vs. −25%, P = 0.001) (Fig ). G2/M arrest by nocodazole treatment was validated by FACS analysis of NPCs (Fig ). To further investigate Sox4 regulation in NPCs, we raised a phospho‐specific antibody targeting the S316 site of Sox4 (pSox4) (Fig , schematic). Immunoblot analysis of overexpressed Flag‐Sox4 in 293T cell lysates treated with λ‐phosphatase revealed a specific band at 70 kDa in the non‐treated samples only, demonstrating specificity of the antibody towards phosphorylated Sox4 (Fig ). The pSox4 antibody did not detect the S316 mutant upon immunoblotting (Fig ). Treatment of Flag‐Sox4 ‐transfected 293T cells with BIO led to decreased Sox4 phosphorylation, and this effect was greater in nocodazole‐treated cells (Fig ). Moreover, immunoblot analysis of forebrains revealed a specific band at 70 kDa, representing phosphorylated Sox4, that was increased in DKO embryos (Fig ), consistent with elevated phosphorylation by GSK3. For Sox11, Ccny / l1 mutant NPCs revealed strongly reduced protein levels and elevated (not significant) mRNA levels, similar to Sox4 (Figs and ). Moreover, IF analysis on E13.5 neocortex sections revealed significantly decreased Sox11 protein levels in mitotic cells within the SVZ (−23%, P = 0.04) (Fig ). Sox11 protein levels were also decreased in newborn neurons and non‐mitotic BPs, as evidenced by immunoblot analysis of forebrain protein lysates and co‐IF with Tbr2 (−10%, P = 0.01), respectively (Fig ). Expression of Sox11 was not significantly altered in DKO dorsal forebrains ( P = 0.46), confirming the regulation of Sox11 by Ccny / l1 is post‐transcriptional (Fig ). Finally, Sox11 co‐localized with Ccny and pLRP6 in the SVZ (Fig ). In summary, Ccny / l1 deficiency leads to decreased Sox4/11 protein levels in NPCs and predominantly in mitotic cells, consistent with Sox4/11 being novel WNT/STOP targets in the developing neocortex. Sox4 / 11 overexpression and GSK3 inhibition both rescue differentiation defects of DKO NPCs To test whether Sox4/11 misregulation accounts for the decreased cortical neurogenesis observed in DKO embryos, we attempted to rescue the neurogenesis defect in mutant NPCs by overexpressing both proteins using lentiviral transduction of Flag ‐ Sox4 and Flag‐Sox11 in the pLenti‐CAG‐IRES‐EGFP vector (Fig , schematic). 
To avoid possible toxic effects resulting from excessive viral load, we isolated NPCs from DKO and control ( Ccny +/− Ccnyl1 +/− ) E13.5 neocortex instead of performing shRNA knockdown of Ccny . DKO NPCs exhibited similar neurogenesis defects as sh Ccny ‐treated Ccnyl1 mutants, showing reduced Tuj1 staining (Fig ). For rescue experiments, we transduced DKO and control NPCs with Sox4 and Sox11 ‐overexpressing lentiviruses, or empty pLenti‐CAG‐IRES‐EGFP vector lentivirus as a control, incubated the cultures in differentiation medium, and then monitored neuronal output by IF for Tuj1 and Map2 (microtubule‐associated protein 2, an additional neuron marker). IF for GFP or FLAG was used to identify NPCs infected with control or Sox4 / 11 lentiviruses, respectively. Overexpression of Sox4 / 11 was monitored by qPCR and IF, which revealed, respectively, increased Sox4 / 11 mRNA levels and high transduction efficiency in NPCs (Appendix Fig ). Strikingly, Sox4 / 11 overexpression in DKO NPCs led to an almost complete rescue in the number of Tuj1 + and Map2 + cells generated when compared to DKO NPCs transduced with empty vector (Fig ; Appendix Fig ). Importantly, control cells transduced with Sox4 / 11 lentiviruses showed no significant increase in the number of newly formed Tuj1 + /Map2 + neurons (Fig ; Appendix Fig ). Interestingly, many of the rescued Tuj1 + /Map2 + cells lacked mature neurites (Fig , asterisks; Appendix Fig , asterisks; quantified in Appendix Fig ), suggesting that Sox4 / 11 overexpression can rescue initial stages of neurogenesis but not full neuronal differentiation in DKO NPCs. Finally, to corroborate that the DKO differentiation phenotype in cultured NPCs is indeed due to diminished WNT signalling, we carried out a rescue experiment. We treated control and DKO NPCs with the GSK3 inhibitor CHIR99021 (CHIR) and monitored neuronal output by IF for Tuj1 (Fig , schematic). Strikingly, 1 µm CHIR treatment in DKO NPCs led to a significant increase in the number of, mostly, immature neurons, while 3 µm CHIR treatment restored the level of mature neurons to that of controls, thereby leading to a complete rescue (Fig ). CHIR treatment also promoted neurogenesis in control NPCs, which is consistent with previous reports (Rosenbloom et al , ), and however, the increase in neurogenesis observed was significantly lower when compared to CHIR‐treated DKO NPCs (Fig ). Hence, we conclude that Ccny/l1 promote differentiation of cultured NPCs through the WNT signalling pathway.
To study the role of WNT/STOP signalling in the developing mouse neocortex, we generated embryos deficient for Ccny and Ccnyl1 (hereafter referred to as double knockout (DKO) embryos). In contrast to individual Ccny- or Ccnyl1-deficient embryos, which are both viable (An et al , ; Koch et al , ), DKO embryos displayed in utero death beginning at embryonic day 14.5 (E14.5). To avoid the possibility that our analyses of DKO embryos would reflect non-specific effects of this impending lethality, we analysed DKO and littermate control embryos at E13.5. Analysis of haematoxylin–eosin (HE)-stained DKO forebrains revealed a significantly thinner neocortical wall (−32%, P = 0.0004) compared to littermate controls. Mediolateral neocortex length was not significantly changed (Fig ; quantified in Appendix Fig ). To further dissect the reduction in the thickness of the neocortical wall observed in E13.5 DKO embryos, we measured VZ, SVZ and intermediate zone (IZ) plus cortical plate (CP) thickness upon DNA staining and immunofluorescence microscopy (IF) for T-box brain protein 2 (Tbr2), a BP marker that permitted visualization of the SVZ. This revealed that within the thinner neocortical wall of DKO embryos, the proportion of neocortical wall thickness constituted by the VZ was increased by 12%, whereas those constituted by the SVZ and the IZ+CP were decreased by 20% and 18%, respectively, when compared to the corresponding proportions in the thicker neocortical wall of control embryos (Fig ; Appendix Fig ). To corroborate these observations, we quantified the numbers of APs, BPs and post-mitotic deep-layer neurons upon IF for their respective markers Paired box protein 6 (Pax6), Tbr2 and T-box brain protein 1 (Tbr1). The percentage of Pax6+ cells in the thinner neocortex of E13.5 DKO embryos was slightly increased (+11%, P = 0.04), whereas the percentages of Tbr2+ and Tbr1+ cells were decreased (−38%, P = 0.0006; −25%, P = 0.01; respectively) when compared to the thicker neocortex of control embryos (Fig ). Consistent with this, IF for βIII-tubulin (Tuj1), which marks newborn neurons, revealed that in the thinner neocortical wall of E13.5 DKO embryos, the layers containing newborn neurons comprised a smaller proportion of the neocortical wall thickness than in control embryos (control 50.9 ± 5.5 vs. DKO 33.2 ± 1.0 (% of total cortex thickness), P = 0.048; Fig ). Together, these data indicated that the reduction in neocortical wall thickness in E13.5 DKO embryos was primarily due to a decrease in the levels of BPs and newborn neurons, which in turn suggested that cortical neurogenesis is reduced in DKO embryos. Decreased BP and, consequently, post-mitotic neuron levels can be due, at least in part, to structural defects in neocortical cytoarchitecture, such as improper organization of the radial glial scaffold and disruption of apical–basal polarity, which may impede migration of BPs to the SVZ and of newborn neurons to the CP (Taverna et al , ). However, IF against the radial glia-specific intermediate filament marker nestin revealed no overt abnormalities in the radial glial scaffold of E13.5 DKO neocortex when compared to control (Fig ). Also, we observed normal enrichment of β-catenin at the apical cell cortex of DKO forebrains, suggesting that apical–basal polarity was not affected in the absence of Ccny/l1 (Fig ). We conclude that DKO embryos display neurogenesis defects but no major structural abnormalities in the embryonic neocortex.
We performed IF for Ccny and Ccnyl1 in the E12.5–E13.5 mouse neocortex. Interestingly, both Ccny and Ccnyl1 immunoreactivity was concentrated at the apical cell cortex/apical plasma membrane of the VZ of control embryos (Fig ). No Ccny or Ccnyl1 immunoreactivity was detected in the neocortex of DKO embryos, confirming that the two polyclonal Ccny and Ccnyl1 antibodies used are specific (Fig EV1E and F). Ccny and Ccnyl1 immunoreactivity was also detected in, respectively, 28 ± 0.63% and 24 ± 0.68% of Tbr2+ BPs in the SVZ (Fig ). Ccny/l1 were not detected in post-mitotic neurons. To further analyse the distribution of Ccny/l1 protein in APs and BPs, we performed in utero electroporation (IUE) in E13.5 embryos with a plasmid coding for green fluorescent protein (GFP) and analysed embryos at E15.5. Triple IF for Ccny/l1, the AP marker Sox2 (SRY (sex-determining region Y)-box 2) and GFP confirmed Ccny/l1 immunoreactivity at the apical membrane of electroporated APs (Fig ), while triple IF for Ccny/l1, Tbr2 and GFP revealed Ccny/l1 immunoreactivity as single puncta in BPs (Fig ). In light of these observations, we next performed IF with an LRP6 antibody specific for the casein kinase 1 gamma (CK1γ) phosphorylation site T1479, which marks active WNT signalling (Davidson et al , ). Similar to the Ccny and Ccnyl1 immunoreactivity, phospho-T1479 LRP6 immunoreactivity was also concentrated at the apical cell cortex/apical plasma membrane of the VZ of E13.5 control mouse neocortex (Fig ). This concentration reflected the specific enrichment of phospho-LRP6 immunoreactivity in 91% of the mitotic APs analysed (Fig , arrowheads, inset). The apical concentration of phospho-LRP6 immunoreactivity in mitotic APs is consistent with the fact that mitosis of APs typically occurs at the ventricular surface of the neocortex (Taverna et al , ). Phospho-T1479 LRP6 immunoreactivity was also observed in 48% of mitotic BPs of E12.5 control mouse neocortex (Fig , inset). To confirm this pattern of immunostaining, we performed IF on control embryonic mouse neocortex with another phospho-LRP6 antibody, which detects the CDK14 priming phosphorylation site S1490 (Davidson et al , ). Again, phospho-S1490 LRP6 immunoreactivity was enriched in E13.5 mitotic APs (90%) and BPs (88%), with the mitotic stage of these progenitors being confirmed by co-immunostaining for the mitotic marker phospho-histone H3 (pHH3) (Fig , insets and arrowhead). Mitotic APs also showed Ccny (30%) and Ccnyl1 (29%) immunoreactivity (Fig ). IF for total LRP6, CDK14 and GSK3β revealed the greatest concentration of immunoreactivity at the apical cell cortex/apical plasma membrane of the VZ of E13.5 control mouse neocortex (Fig ). Altogether, these data suggest that the core components of the WNT/STOP signalling pathway are expressed in the embryonic mouse neocortex and that WNT/LRP6 signalling peaks during mitosis in APs and BPs. To determine whether WNT signalling in DKO forebrains was affected at the receptor level, we quantified the immunostaining intensity for active phospho-LRP6 (T1479) within mitotic APs and BPs of the E13.5 neocortex and detected a decrease (−15%, P = 0.01) when compared to controls (Fig ). Phosphorylation of LRP6 at S1490 was even more markedly reduced (−45%, P = 0.001) when analysed by immunoblotting of protein lysates extracted from E13.5 DKO and control forebrains (Fig ).
To monitor canonical WNT signalling, we performed RNAScope analysis on sections of E13.5 neocortex using a probe against the β-catenin target gene Axin2. Axin2 expression was not significantly changed in the neocortex of DKO embryos when compared to controls (Fig ). To confirm this result, we extracted RNA from E13.5 dorsal forebrains and performed qPCR analysis. Expression of Axin2 and of N-myc, another WNT target gene in the neocortex (Kuwahara et al, ), was not significantly altered in DKO dorsal forebrains (Axin2, P = 0.95; N-myc, P = 0.41) (Fig ). Furthermore, immunoblot analysis of whole forebrain lysates probed with an antibody against dephosphorylated β-catenin, which represents its active form, showed no significant change in E13.5 DKO embryos (Fig ). We conclude that combined Ccny/l1 deficiency in the embryonic mouse neocortex leads to decreased LRP6 receptor activation without changes in β-catenin activity. Together with the reduction in neuron levels in the DKO neocortex, these data are consistent with WNT/STOP signalling being required for neurogenesis in the embryonic mouse neocortex.
In light of the reduced levels of BPs and neurons in the neocortex of DKO embryos, we analysed cell cycle parameters of APs and BPs in control and DKO embryos, as alterations in cell cycle progression have been shown to affect NPC fate and cortical neurogenesis (Götz & Huttner, ; Dehay & Kennedy, ; Arai et al , ; Borrell & Calegari, ). To label NPCs in S-phase in the neocortex of control and DKO embryos, we injected the thymidine analog bromo-deoxyuridine (BrdU) at E11.5, E12.5 and E13.5 and sacrificed mice 1 h later. Co-IF for Pax6, Tbr2 and BrdU did not reveal any major difference between control and DKO neocortex in the proportion of APs (Pax6+Tbr2−) that were in S-phase (i.e. BrdU+) at E11.5 and E12.5, although a slight increase was detected for DKO neocortex at E13.5 (Fig ). The average percentage of E13.5 BrdU+ APs, i.e. APs in S-phase, was 31% for control and 39% for DKO neocortex (Fig ). The proportion of BPs (Tbr2+) that were in S-phase (BrdU+) was moderately decreased in DKO neocortex compared to control at all time points analysed (Fig ). The average percentage of E13.5 BrdU+ BPs, i.e. BPs in S-phase, was 24% for control and 21% for DKO neocortex (Fig ). We next determined the length of S-phase of APs and BPs in E13.5 control and DKO neocortex. To this end, we performed timed injections of the thymidine analogs iodo-deoxyuridine (IdU) and BrdU. Briefly, IdU was injected at T = 0 to label APs and BPs in S-phase, and BrdU was injected at T = 1.5 h to identify those APs and BPs that were still in S-phase at this time vs. those that had left S-phase. Embryos were collected 30 min after BrdU injection (Fig , schematic). From the percentage of APs (Tbr2−) and BPs (Tbr2+) of control and DKO neocortex that were IdU+ but BrdU−, i.e. that had left S-phase after 1.5 h (Fig , yellow; Appendix Fig ), we then extrapolated the time when all control and DKO APs and BPs would have left S-phase, which yielded the length of S-phase for control and DKO APs and BPs (Fig ). This revealed a small increase in S-phase length of E13.5 DKO APs as compared to control APs, and an ≈50% increase in S-phase length of E13.5 DKO BPs as compared to control BPs (Fig ). Knowing the percentage values of E13.5 control and DKO APs and BPs in S-phase (Fig ), and the length of S-phase of these types of NPCs (Fig ), allowed us to calculate the total length of the cell cycle of E13.5 control and DKO APs and BPs by dividing the S-phase length values (Fig ) by the percentage values for these NPCs in S-phase (Fig ) and then multiplying the resulting numbers by 100 (Fig ). This revealed no difference between E13.5 control and DKO APs, but nearly a doubling of total cell cycle length in DKO BPs as compared to control (Fig ). To calculate the length of mitosis, we first performed co-IF for pHH3, Ki67 and Tbr2 (Fig ) to determine the percentage of cycling (Ki67+) E13.5 control and DKO APs (Tbr2−) and BPs (Tbr2+) that were in mitosis (pHH3+) (Fig ). This revealed no major differences between control and DKO APs, nor between control and DKO BPs (Fig ). We then calculated the length of mitosis of E13.5 control and DKO APs and BPs by multiplying the total cell cycle length values for each of these NPCs (Fig ) by the percentage values for the respective NPC in mitosis (Fig ) and dividing the resulting numbers by 100, which yielded the length of mitosis (Fig ).
While no significant difference in mitosis length was found between control and DKO APs, mitosis length of DKO BPs was more than doubled as compared to control (Fig ). Data consistent with these findings were obtained when the proportions of APs and BPs in mitosis, deduced from ventricular and abventricular pHH3+ cells, respectively, were compared between E11.5–E13.5 control and DKO neocortex (Fig ). All cell cycle calculations are detailed in the . Mitotic delay can lead to cell cycle arrest and increased apoptosis (Chen et al , ; Pilaz et al , ). To compare apoptosis levels in DKO vs. control forebrains, we performed IF with an antibody against cleaved caspase 3 (Fig ). Except for a minor, statistically non-significant increase at E11.5, apoptosis levels in the E12.5 and E13.5 neocortex of DKO embryos were unchanged when compared to controls (Fig ). This was confirmed by TUNEL staining, which at E13.5 was also not significantly altered in the neocortex of DKO embryos (Fig ). Thus, apoptosis is unlikely to explain the reduced thickness of the cortical plate in DKO embryos. We conclude that the reduced cortical plate thickness of DKO embryos involves delayed cell cycle progression and increased mitosis length in BPs.
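As a compact illustration of the cell cycle arithmetic described above, the Python sketch below encodes the three relations used: linear extrapolation of S-phase length from the IdU/BrdU double-labelling interval, total cell cycle length as S-phase length divided by the percentage of cells in S-phase times 100, and mitosis length as total cell cycle length times the percentage of cells in mitosis divided by 100. This is a hedged reconstruction for orientation only, not the authors' analysis code (the actual calculations are detailed elsewhere in the paper); the leaving fraction and mitotic percentage in the worked example are hypothetical placeholders, whereas the 31% S-phase value for E13.5 control APs is quoted from the text.

```python
# A minimal sketch of the cell cycle calculations described above;
# an illustration only, not the authors' analysis code.

IDU_BRDU_INTERVAL_H = 1.5  # hours between the IdU and BrdU injections

def s_phase_length(frac_left_s):
    """Linear extrapolation: if a fraction `frac_left_s` of the
    IdU-labelled (S-phase) cohort is IdU+BrdU- after the interval,
    i.e. has left S-phase, the time for the whole cohort to leave
    S-phase estimates the S-phase length Ts."""
    return IDU_BRDU_INTERVAL_H / frac_left_s

def total_cell_cycle_length(ts_hours, pct_in_s):
    """Tc = Ts / (% of cycling cells in S-phase) * 100."""
    return ts_hours / pct_in_s * 100

def mitosis_length(tc_hours, pct_in_m):
    """Tm = Tc * (% of cycling cells in mitosis) / 100."""
    return tc_hours * pct_in_m / 100

# Worked example: 31% of E13.5 control APs were BrdU+ (in S-phase,
# quoted in the text); the leaving fraction (0.12) and the mitotic
# percentage (4%) are hypothetical placeholders, not measured values.
ts = s_phase_length(frac_left_s=0.12)          # 12.5 h
tc = total_cell_cycle_length(ts, pct_in_s=31)  # ~40.3 h
tm = mitosis_length(tc, pct_in_m=4)            # ~1.6 h
print(f"Ts = {ts:.1f} h, Tc = {tc:.1f} h, Tm = {tm:.1f} h")
```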
Lack of Ccny/l1 expression reduces asymmetric AP division and neurogenesis in the embryonic neocortex
The relative increase in the thickness of the VZ and the reduced BP levels in DKO embryos raised the possibility that their thinner neocortex was not only due to a delayed cell cycle progression of BPs. Instead, the results suggested that Ccny/l1 may affect the generation of BPs from APs, and consequently of post-mitotic neurons. In the mammalian neocortex, the switch of APs from symmetric, proliferative divisions to asymmetric, BP-genic divisions is often associated with changes in apical membrane distribution, whereby unequal inheritance of the apical plasma membrane by the daughter cells indicates an asymmetric mode of AP division (Kosodo et al , ; Delaunay et al , ). We quantified the number of symmetrically vs. asymmetrically dividing APs by IF using the "cadherin hole" method, which allows determination of the plane of division in mitotic APs (Fig ) (Kosodo et al , ). Interestingly, E13.5 DKO neocortex displayed a 36% reduction in asymmetric AP division (from 47% to 30% of all AP divisions, Fig ), consistent with decreased generation of BPs in the absence of Ccny/l1. Mitotic spindle orientation is critical for determining symmetric vs. asymmetric cell division of NPCs in the developing neocortex (Konno et al , ; Yingling et al , ; Lizarraga et al , ; Asami et al , ; LaMonica et al , ; Xie et al , ; Mora-Bermúdez & Huttner, ), with a key role of astral microtubules (aMTs) (Mora-Bermúdez et al , ). A specific subpopulation of aMTs, which reach the apical or basal cell cortex and are referred to as apical–basal aMTs, is more abundant in symmetrically dividing APs than in asymmetrically dividing APs (Mora-Bermúdez et al , ). These apical–basal aMTs promote a mitotic spindle orientation perpendicular to the apical–basal axis of APs and reduce the variability of mitotic spindle orientation, which in turn favours symmetric AP division (Mora-Bermúdez et al , ). Perturbation specifically of these apical–basal aMTs increases asymmetric, BP-genic AP divisions and neurogenesis (Mora-Bermúdez et al , ). To examine aMT abundance, we performed IF for α-tubulin, acquired Z-stack images of dividing APs and quantified the number of apical–basal aMTs vs. the aMTs reaching the central cell cortex (central aMTs) (Fig ). Strikingly, E13.5 DKO APs exhibited an increase specifically in apical–basal, but not central, aMTs (Fig ), providing a mechanistic explanation for the increase in symmetric AP division (Fig ). We conclude that in dividing APs, lack of Ccny/l1 expression results in an increase in the number of apical–basal aMTs, which reduces asymmetric, BP-genic cell division and hence neurogenesis in the embryonic neocortex. DKO embryos display premature embryonic lethality, limiting our analysis of neurogenesis to early stages of development in which the cortical plate is less developed. To analyse the effect of Ccny/l1 knockdown in NPCs at later time points, we performed IUE of mouse embryos at E13.5 with plasmids coding for either a control shRNA (Co) or shRNAs against Ccny and Ccnyl1 (shCcny/l1), along with a plasmid coding for GFP. At E15.5, knockdown of Ccny/l1 greatly reduced the proportion of the GFP+ progeny of the electroporated cells that were Tbr2+, i.e. BPs (Fig ). Similarly, at E17.5, the proportion of the GFP+ progeny of the electroporated cells that expressed the deep-layer neuron marker Ctip2 was depleted (Fig ).
Altogether, these data corroborate that lack of Ccny/l1 expression reduces neurogenesis in the embryonic mouse neocortex and show that this effect persists throughout embryonic development. The IUE results also exclude the possibility that the neurogenesis defect observed in DKO embryos is due to an overall delay in embryonic development, or results from indirect effects related to global deletion of Ccny/l1. We next asked whether NPCs deficient for Ccny/l1 also display a reduced capacity to generate neurons in vitro. To this end, we isolated NPCs from E13.5 Ccny+/−;Ccnyl1−/− forebrains and transduced them with a lentivirus expressing shRNA against Ccny (hereafter referred to as mutant NPCs) to induce acute Ccny/l1 deficiency. NPCs were then cultured in differentiation media to promote neurogenesis. IF for Tuj1 revealed a drastic decrease in the percentage of mutant NPC-derived cells expressing this marker of newborn neurons, as well as a significant reduction in neurite length (control 47.7 ± 1.4 µm vs. mutant 23.8 ± 1.8 µm, P = 0.0006; Fig ). The defect in neuron generation was confirmed by qPCR analysis, which demonstrated a major decrease of the neuronal markers Tuj1 and doublecortin in mutant NPC cultures (Fig ). The neural stem cell marker Sox2 showed a slight but non-significant increase in mutant cells (Fig ). Levels of phospho-LRP6 (S1490), marking active WNT receptor signalling, were drastically reduced, whereas Axin2 expression was not significantly altered in the mutant NPC cultures, confirming that the effects of Ccny/l1 during in vitro differentiation are independent of canonical WNT/β-catenin signalling (Fig ). Furthermore, immunoblotting of total protein lysates from mutant NPCs showed strongly increased overall lysine-48 ubiquitination (Fig ), indicating globally enhanced protein degradation, a hallmark of impaired WNT/STOP signalling (Taelman et al , ; Acebron et al , ). These results provide further support that Ccny/l1 and WNT/STOP signalling promote asymmetric AP division and neurogenesis in the embryonic neocortex.
Sox4/11 are direct GSK3 targets
WNT/STOP signalling acts by protecting proteins carrying GSK3 phosphodegrons from proteasomal degradation. To identify downstream targets of WNT/STOP signalling, we screened for effector proteins with a role in neurogenesis and neuronal differentiation that harbour potential GSK3 phosphorylation sites. Two candidates were the SRY-related high-mobility-group box proteins Sox4 and Sox11, SoxC subclass transcription factors. Sox4/11 display pan-neuronal activity and are indispensable for neurogenesis and neuronal differentiation (Bergsland et al , ). Sox4 is highly expressed by BPs, and Sox11 is expressed both by BPs and newly formed neurons (Chen et al , ). Conditional inactivation of Sox4 in the neuroepithelium leads to a strong reduction of BPs, while Sox11 deletion leads to decreased post-mitotic neurons (Chen et al , ). The combined phenotypes resulting from loss of Sox4/11 thus resemble the neurogenesis defects observed in DKO embryos. Mouse Sox4 contains a putative GSK3 phosphorylation site consisting of serines spaced by three amino acids (SxxSSxxSS(316)P) (Fig ), a common feature of GSK3 motifs (Beurel et al , ). Mouse Sox11 contains two putative GSK3 phosphorylation sites: (S(244)PxxS) and (S(289)PxxSxxxS) (Fig ). The Sox4/11 GSK3 sites are highly conserved in other mammals such as chimpanzees and humans. Moreover, S315 and S316 of the Sox4 motif, and S244 and S289 of the Sox11 motifs, are phosphorylated according to PhosphoSitePlus (Hornbeck et al , ) (Fig ). Sox11 S244 and S289 phosphorylation has also been reported elsewhere (Balta et al , ). Finally, in silico analysis by NetPhos 3.1 (Blom et al , ) predicted GSK3 to be a top candidate for phosphorylating S316 of the Sox4 motif and both S244 and S289 of the Sox11 motifs (Fig ). To determine whether Sox4/11 are indeed phosphorylated, we overexpressed N-terminally Flag-tagged Sox4/11 in HEK293T (293T) cells and treated protein lysates with λ-phosphatase. Immunoblot analysis revealed a downshift of Sox4 from 70 to 65 kDa (Fig ), and a downshift of a higher molecular weight smear representing phosphorylated Sox11 (Fig ). Endogenous Sox4 was also downshifted upon λ-phosphatase treatment (Fig ). By IF, overexpressed Sox4/11 localized mainly to the nucleus with only minor cytoplasmic staining, consistent with their role as transcription factors and validating our constructs (Fig ). Next, we tested the effects of mutating the potential GSK3 phospho-motifs of Sox4/11. For Sox4, we generated a mutant in which only S316 was mutated to alanine and one in which all five serines were mutated (S316all) (Fig ). For Sox11, we generated a mutant in which S244 and S289 were mutated (S244S289) and one in which all serines of both motifs were mutated (S244allS289all) (Fig ). Immunoblot analysis revealed a downshift in the Sox4 S316 mutant, with multiple bands migrating lower than the expected size of wild-type Sox4. The S316all mutant revealed an even more drastic downshift, mimicking the effect of phosphatase treatment on wild-type Sox4 (Fig ). For Sox11, a minor downward shift of phosphorylated Sox11 was detected in the S244S289 mutant, while a more obvious shift was detected in S244allS289all mutants, with a concomitant increase in the major Sox11 band, corresponding to unphosphorylated Sox11 (Fig ). Next, we tested the effect of GSK3β overexpression on Flag-Sox4/11 by co-transfection in 293T cells.
GSK3β overexpression led to a slight but significant decrease in total Sox4 protein levels, with a greater decrease observed in non-phosphorylated Sox4 (65 kDa band) (Fig ). Importantly, this effect was reversed by 1-h treatment with the GSK3 inhibitor 6-bromoindirubin-3'-oxime (BIO) (Fig ). GSK3β overexpression and BIO treatment had no effect on the S316all mutant form of Sox4, confirming that this site is a GSK3β phospho-motif (Fig ). GSK3β overexpression also reduced Sox11 protein levels, especially non-phosphorylated Sox11, and this effect was reversed by 1-h BIO treatment (Fig ). GSK3β overexpression and BIO treatment had no effect on the S244allS289all mutant form of Sox11 (Fig ). To test whether Sox4/11 are directly phosphorylated by GSK3, we immunopurified wild-type and mutant (S316all and S244allS289all) Sox4 and Sox11, treated the proteins with λ-phosphatase and then performed in vitro kinase assays by incubating them with recombinant GSK3β and γ-32P-labelled ATP. Wild-type Sox4/11 were both highly phosphorylated by GSK3β, while only minimal phosphorylation of the corresponding mutants could be detected (Fig ). Because WNT/STOP signalling inhibits GSK3β to protect target proteins from proteasomal degradation (Taelman et al , ; Acebron et al , ), we tested whether Sox4/11 are ubiquitinated by co-transfecting HA-tagged ubiquitin with Flag-Sox4/11 and then briefly treating with MG132 to block proteasomal degradation of ubiquitin-conjugated polypeptides. Pull-down of Sox proteins via their FLAG tag followed by immunoblotting against HA-ubiquitin revealed a characteristic polyubiquitin smear in both Sox4- and Sox11-transfected cells, indicating that both Sox proteins are ubiquitin-conjugated (Fig ). We conclude that Sox4/11 are (i) directly phosphorylated by GSK3β and (ii) regulated by proteasomal degradation.
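To make the motif screening step described above concrete, the short Python sketch below scans protein sequences for the putative GSK3 phospho-motif patterns quoted in the text (SxxSSxxSSP for Sox4; SPxxS and SPxxSxxxS for Sox11). This is a hypothetical illustration of the pattern-matching logic, not the authors' screening pipeline, and the input sequences are toy fragments rather than the full mouse Sox4/11 proteins.

```python
# A hypothetical illustration of the GSK3 phospho-motif screen described
# above; not the authors' pipeline. The patterns encode the motifs quoted
# in the text: SxxSSxxSSP (Sox4), SPxxS and SPxxSxxxS (Sox11).
import re

GSK3_MOTIFS = {
    "SxxSSxxSSP": re.compile(r"S..SS..SSP"),
    "SPxxS": re.compile(r"SP..S"),
    "SPxxSxxxS": re.compile(r"SP..S...S"),
}

def scan_gsk3_motifs(name, sequence):
    """Print every match of each putative GSK3 motif in `sequence`,
    reporting 1-based residue positions as used in protein annotation."""
    for motif, pattern in GSK3_MOTIFS.items():
        for match in pattern.finditer(sequence):
            print(f"{name}: {motif} at residues "
                  f"{match.start() + 1}-{match.end()}")

# Toy fragments (not the full mouse proteins): the first contains a
# Sox4-like serine stretch, the second Sox11-like SP-initiated motifs.
scan_gsk3_motifs("toy_Sox4_fragment", "AAASQQSSLLSSPGGG")
scan_gsk3_motifs("toy_Sox11_fragment", "GGSPAASAAAS")
```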
Sox4/11 protein levels are decreased in DKO NPCs, predominantly during mitosis
We next tested whether Sox4/11 are regulated by GSK3 in NPCs, focusing first on Sox4. We treated cultured NPCs with BIO for 24 h and analysed endogenous Sox4 protein levels by immunoblot analysis. BIO treatment led to an overall increase in Sox4, with an even greater increase in the lower, non-phosphorylated band (Fig ). Importantly, mutant Ccny/l1 NPCs showed significantly reduced Sox4 protein levels (Fig ), even though Sox4 mRNA levels were slightly elevated (Fig ). We next analysed Sox4 protein levels in the neocortex of E13.5 embryos in vivo. By IF, Sox4 co-localized with Ccny and pLRP6 (S1490) in the SVZ (Fig ), and was concentrated in mitotic BPs (Fig ). In APs, Sox4 was enriched at mitosis (Fig ). In DKO embryos, we could not detect an obvious decrease in Sox4 staining intensity in Tbr2+ cells (Fig ). Likewise, immunoblot analysis of DKO forebrain lysates showed no reduction in total Sox4 protein levels (Fig ), and qPCR analysis of RNA extracted from dorsal forebrains revealed no significant changes in Sox4 expression (P = 0.87; Fig ). In contrast, focusing on mitotic (pHH3+) BPs by IF, Sox4 protein levels were greatly reduced in DKO neocortex (−42%, P = 0.0002; Fig ). To further investigate WNT/STOP regulation of Sox4 during mitosis, we treated NPC cultures with nocodazole to arrest cells in G2/M. Interestingly, nocodazole-treated mutant NPCs displayed a greater reduction of Sox4 than non-treated cells (−45%, P = 0.02 vs. −25%, P = 0.001) (Fig ). G2/M arrest by nocodazole treatment was validated by FACS analysis of NPCs (Fig ). To further investigate Sox4 regulation in NPCs, we raised a phospho-specific antibody targeting the S316 site of Sox4 (pSox4) (Fig , schematic). Immunoblot analysis of overexpressed Flag-Sox4 in 293T cell lysates treated with λ-phosphatase revealed a specific band at 70 kDa in the non-treated samples only, demonstrating the specificity of the antibody towards phosphorylated Sox4 (Fig ). The pSox4 antibody did not detect the S316 mutant upon immunoblotting (Fig ). Treatment of Flag-Sox4-transfected 293T cells with BIO led to decreased Sox4 phosphorylation, and this effect was greater in nocodazole-treated cells (Fig ). Moreover, immunoblot analysis of forebrains revealed a specific band at 70 kDa, representing phosphorylated Sox4, that was increased in DKO embryos (Fig ), consistent with elevated phosphorylation by GSK3. For Sox11, Ccny/l1 mutant NPCs revealed strongly reduced protein levels and elevated (though not significantly) mRNA levels, similar to Sox4 (Figs and ). Moreover, IF analysis of E13.5 neocortex sections revealed significantly decreased Sox11 protein levels in mitotic cells within the SVZ (−23%, P = 0.04) (Fig ). Sox11 protein levels were also decreased in newborn neurons and non-mitotic BPs, as evidenced by immunoblot analysis of forebrain protein lysates and co-IF with Tbr2 (−10%, P = 0.01), respectively (Fig ). Expression of Sox11 was not significantly altered in DKO dorsal forebrains (P = 0.46), confirming that the regulation of Sox11 by Ccny/l1 is post-transcriptional (Fig ). Finally, Sox11 co-localized with Ccny and pLRP6 in the SVZ (Fig ). In summary, Ccny/l1 deficiency leads to decreased Sox4/11 protein levels in NPCs, predominantly in mitotic cells, consistent with Sox4/11 being novel WNT/STOP targets in the developing neocortex.
Sox4/11 overexpression and GSK3 inhibition both rescue differentiation defects of DKO NPCs
To test whether Sox4/11 misregulation accounts for the decreased cortical neurogenesis observed in DKO embryos, we attempted to rescue the neurogenesis defect in mutant NPCs by overexpressing both proteins using lentiviral transduction of Flag-Sox4 and Flag-Sox11 in the pLenti-CAG-IRES-EGFP vector (Fig , schematic). To avoid possible toxic effects resulting from excessive viral load, we isolated NPCs from DKO and control (Ccny+/−Ccnyl1+/−) E13.5 neocortex instead of performing shRNA knockdown of Ccny. DKO NPCs exhibited neurogenesis defects similar to those of shCcny-treated Ccnyl1 mutants, showing reduced Tuj1 staining (Fig ). For rescue experiments, we transduced DKO and control NPCs with Sox4- and Sox11-overexpressing lentiviruses, or empty pLenti-CAG-IRES-EGFP vector lentivirus as a control, incubated the cultures in differentiation medium, and then monitored neuronal output by IF for Tuj1 and Map2 (microtubule-associated protein 2, an additional neuron marker). IF for GFP or FLAG was used to identify NPCs infected with control or Sox4/11 lentiviruses, respectively. Overexpression of Sox4/11 was monitored by qPCR and IF, which revealed, respectively, increased Sox4/11 mRNA levels and high transduction efficiency in NPCs (Appendix Fig ). Strikingly, Sox4/11 overexpression in DKO NPCs led to an almost complete rescue in the number of Tuj1+ and Map2+ cells generated when compared to DKO NPCs transduced with empty vector (Fig ; Appendix Fig ). Importantly, control cells transduced with Sox4/11 lentiviruses showed no significant increase in the number of newly formed Tuj1+/Map2+ neurons (Fig ; Appendix Fig ). Interestingly, many of the rescued Tuj1+/Map2+ cells lacked mature neurites (Fig , asterisks; Appendix Fig , asterisks; quantified in Appendix Fig ), suggesting that Sox4/11 overexpression can rescue initial stages of neurogenesis but not full neuronal differentiation in DKO NPCs. Finally, to corroborate that the DKO differentiation phenotype in cultured NPCs is indeed due to diminished WNT signalling, we carried out a further rescue experiment. We treated control and DKO NPCs with the GSK3 inhibitor CHIR99021 (CHIR) and monitored neuronal output by IF for Tuj1 (Fig , schematic). Strikingly, 1 µM CHIR treatment of DKO NPCs led to a significant increase in the number of mostly immature neurons, while 3 µM CHIR treatment restored the level of mature neurons to that of controls, thereby leading to a complete rescue (Fig ). CHIR treatment also promoted neurogenesis in control NPCs, consistent with previous reports (Rosenbloom et al , ); however, the increase in neurogenesis was significantly smaller than in CHIR-treated DKO NPCs (Fig ). Hence, we conclude that Ccny/l1 promote differentiation of cultured NPCs through the WNT signalling pathway.
We set out to determine the role of WNT/STOP signalling in neocortex development by analysing mice mutant for Ccny and Ccnyl1, key regulators of the pathway. We find that WNT/STOP orchestrates the process of neurogenesis by (i) regulating symmetric vs. asymmetric AP division; (ii) controlling the length of the cell cycle, notably of mitosis, of BPs; and (iii) promoting neuron generation through Sox4 and Sox11 stabilization in BPs. The results suggest that WNT/STOP rather than WNT/β-catenin signalling is the primary driver of cortical neurogenesis. Our study reconciles seemingly contradictory reports by revealing a division of labour in neocortex development, whereby WNT/STOP promotes a differentiative process, i.e. the generation of NPCs committed to neurogenesis, whereas WNT/β-catenin predominantly promotes NPC self-renewal.
A division of labour between WNT/STOP and WNT/β-catenin signalling during neocortical neurogenesis
The precise biological role and molecular function of WNT signalling in the mouse neocortex have remained controversial. Most in vivo studies suggest that the primary role of canonical WNT signalling is to promote self-renewal at the expense of differentiation. This is particularly evident in β-catenin mutant mice, which display increased AP cell cycle exit and premature neurogenesis (Mutch et al , ; Draganova et al , ). Although some studies have shown that β-catenin can also promote neuronal differentiation of NPCs (Hirabayashi et al , ; Israsena et al , ; Kuwahara et al , ), these studies were mostly performed in vitro and are not supported by in vivo evidence. More recently, electroporation of Wnt3a into the neocortex of developing mouse embryos increased both proliferation of APs and differentiation of BPs, seemingly resolving the controversy and leading the authors to conclude that canonical WNT can indeed promote both processes in a cell type-specific manner (Munji et al , ). However, this study relied on manipulating WNT signalling at the ligand level, which affects not only WNT/β-catenin but also WNT/STOP signalling, which was not examined. WNT/STOP acts through the same WNT/LRP6/GSK3 axis but bifurcates upstream of, and is independent of, β-catenin. Its primary output is not transcription but post-transcriptional protein stabilization (Acebron & Niehrs, ). We now propose a new model (Fig ), whereby WNT/STOP promotes neuron generation from BPs through stabilization of Sox4/11, while canonical WNT/β-catenin signalling promotes AP self-renewal. Our conclusions based on the analysis of E13.5 DKO embryos are further supported by the results of Ccny/l1 knockdown at later stages of neocortex development in vivo and of NPC culture experiments in vitro. Together, these data show reduced generation of BPs, and consequently of neurons, in the absence of Ccny/l1. We therefore conclude that DKO embryos exhibit reduced neurogenesis through decreased generation of BPs from APs and of post-mitotic neurons from BPs. However, while our data demonstrate that the knockout of Ccny/l1 does not affect WNT/β-catenin signalling, they do not rule out the possibility that WNT/β-catenin signalling also contributes to these processes. LRP6 is a key regulator of WNT/STOP, and its phosphorylation at G2/M by the Ccny/CDK complex leads to GSK3 inhibition and activation of the pathway, notably during mitosis, when transcription is low (Davidson et al , ).
Interestingly, analysis of Lrp6 mutant mice reveals a thinner neocortex, reduced neuronal differentiation, and only minor changes in proliferation (Zhou et al , ). Similarly, deficiency in the actin-binding protein filamin A induces neurogenesis defects within the mouse cerebral cortex and impairs LRP6/GSK3 signalling and asymmetric cell division (Lian et al , ; Lian et al , ). These phenotypes are remarkably similar to those observed here in Ccny/l1 DKO embryos, suggesting that Lrp6 mutants primarily present a WNT/STOP signalling defect. The reason why Lrp6 mutants do not also manifest the proliferation defects of β-catenin mutants is unclear, but may be due to compensation by LRP5. Functional redundancy between LRP5 and LRP6 in WNT/β-catenin signalling is well documented (Kelly et al , ; Goel et al , ; Zhong et al , ; Liu et al , ).
WNT/STOP signalling promotes neurogenesis via Sox4 and Sox11 stabilization
An important finding of this study is the identification of the neurogenic transcription factors Sox4 and Sox11 as bona fide WNT/STOP targets. Several links between SoxC transcription factors and WNT signalling are known: Sox4 and Sox11 interact with the β-catenin destruction complex and, in lung cancer cells, Sox4 is a direct target of β-catenin (Bhattaram et al , ; Melnik et al , ). Our data support and further develop these links by showing that Sox4/11 are directly phosphorylated by GSK3 and thereby targeted for proteasomal degradation. We also show that the majority of Sox4 is phosphorylated at a single GSK3 phospho-motif, while Sox11 contains two motifs and is only partially phosphorylated. GSK3 phosphorylation is known to create phosphodegrons for recognition by E3 ubiquitin ligases, leading to proteasomal degradation (Taelman et al , ). Consistent with this possibility, phosphorylation by GSK3β induced ubiquitination and decreased overall Sox4 and Sox11 levels. Previous reports have shown that, in the neocortex, Sox11 cellular localization and protein stabilization are regulated by phosphorylation and ubiquitination, respectively, supporting the notion that post-transcriptional regulation of Sox11 is essential for its function in vivo (Balta et al , ; Balta et al , ; Chiang et al , ). WNT/STOP-mediated inhibition of GSK3 predominantly occurs in G2/M (Acebron et al , ). Consistently, we find that the stabilization of Sox4/11 by WNT/STOP is cell cycle-dependent, with peak levels in mitotic cells. WNT/STOP signalling in G2/M of mother cells is thought to endow daughter cells with a growth advantage by stabilizing bulk protein. Deficiency in WNT/STOP signalling reduces G1 growth and delays cell cycle progression (Acebron et al , ; Huang et al , ). Analogously, WNT/STOP signalling in G2/M of NPCs may promote differentiation of daughter cells via elevated levels of Sox4/11 during G1/S-phase. An attractive hypothesis is that Sox4/11 are bookmarking transcription factors that remain bound to chromosomes in mitosis to enable target gene reactivation in a timely fashion upon mitotic exit (Palozola et al , ). The importance of decreased Sox4/11 protein levels for the neurogenesis defect of DKO embryos is highlighted by the fact that their overexpression can rescue the differentiation phenotype. Interestingly, the rescue is incomplete: Sox4/11 increase the number of Tuj1+ cells in differentiating DKO NPCs, but only partially restore their morphology, leading to immature neurons.
The incomplete rescue may be due to supraphysiological levels of Sox4/11 resulting from lentiviral transduction, since Sox11 overexpression inhibits dendritic morphogenesis (Hoshiba et al , ). Alternatively, WNT/STOP signalling may have additional targets required for neuronal maturation and morphogenesis. Indeed, proteomic analysis indicates hundreds of potential WNT/STOP target proteins in HeLa cells (Acebron et al , ).
WNT/STOP signalling promotes asymmetric AP division
Another important insight of this work is that Ccny/l1 are required to promote asymmetric division of APs, which is in keeping with their physiological role at the starting point of cortical neurogenesis, since more asymmetric AP divisions lead to more BPs and hence more post-mitotic neurons. In C. elegans embryos, WNT signalling has long been known to regulate spindle asymmetry (Sugioka et al , ), and in embryonic stem cells, a Wnt3a point source can polarise the mitotic spindle to give rise to asymmetric cell division (Habib et al , ). Taken together, these studies support the conclusion that WNT/STOP signalling promotes asymmetric AP division. Although we have not identified the molecular targets responsible for the decreased asymmetric AP division in DKO embryos, we narrowed down the mechanism to an increase in apical–basal aMTs, which can directly stabilize the positioning of the mitotic spindle and thus promote symmetric AP division (Mora-Bermúdez et al , ). This finding is consistent with recent work in which GSK3 overexpression and inhibition of WNT secretion in HCT116 cells increased MT polymerization rates during mitosis (Lin et al , ). Interestingly, WNT/STOP inhibition has also been shown to induce mitotic spindle defects and improper chromosome segregation (Stolz et al , ; Lin et al , ), pointing towards a more general role of WNT/STOP signalling in promoting mitotic spindle assembly. One possibility is that WNT/STOP signalling modulates MT plus ends, which are key determinants of spindle orientation (Lu & Johnston, ). Misregulation of MT plus ends may also explain the lengthened mitosis observed in DKO BPs. A candidate WNT/STOP target protein is Kif2a (Taelman et al , ), an MT plus end regulator that is required for cortical neurogenesis (Sun et al , ; Ding et al , ). On the other hand, we cannot exclude cross-talk between WNT/STOP and the WNT/PCP pathway, which is also implicated in asymmetric division of cortical progenitors (Delaunay et al , ).
Mitosis is a critical time window for determining NPC fate
Our study suggests that β-catenin-independent WNT signalling impacts neurogenesis preferentially during mitosis: (i) LRP6 is maximally phosphorylated and activated in mitotic NPCs, (ii) Sox4/11 protein levels peak in mitotic NPCs, (iii) apical–basal aMTs and symmetric cell division are both increased in DKO APs, and (iv) mitosis is retarded in BPs. These observations are consistent with the facts that WNT/STOP signalling stabilizes proteins specifically in mitosis (Acebron et al , ) and regulates cell cycle progression of mitotic cells (Huang et al , ; Stolz et al , ). The process of neurogenesis is intricately connected to the cell cycle length of NPCs (Götz & Huttner, ; Dehay & Kennedy, ; Borrell & Calegari, ). Thus, lengthening the cell cycle of APs induces premature neurogenesis (Calegari & Huttner, ). Conversely, shortening the cell cycle of APs by reducing G1 delays neurogenesis and promotes generation and expansion of BPs (Lange et al , ). It is unclear at which point in the cell cycle NPCs commit to neuronal differentiation.
However, given that Ccny/l1 deficiency reduces neuronal output and that the regulation of neurogenesis by WNT/STOP occurs primarily during cell division, we conclude that mitosis represents a key phase of the cell cycle in which NPCs commit to a neurogenic fate. This conclusion echoes the observation that many human microcephaly-associated genes encode mitotic regulators (Hu et al , ). In summary, our findings resolve the controversy regarding the role of WNT signalling in neocortex development by demonstrating that WNT/STOP is the primary driver of a differentiative process in vivo, i.e. the generation of neurons from NPCs, whereas WNT/β-catenin promotes NPC self-renewal. They also emphasize the importance of mitosis as a critical determinant of NPC fate, when asymmetric AP division and Sox4/11 protein stabilization in BPs are orchestrated by post-transcriptional WNT signalling.
The precise biological role and molecular function of WNT signalling in the mouse neocortex has remained controversial. Most in vivo studies suggest that the primary role of canonical WNT signalling is to promote self‐renewal at the expense of differentiation. This is particularly evident in β‐catenin mutant mice, which display increased AP cell cycle exit and premature neurogenesis (Mutch et al , ; Draganova et al , ). Although some studies have shown that β‐catenin can also promote neuronal differentiation of NPCs (Hirabayashi et al , ; Israsena et al , ; Kuwahara et al , ), these studies were mostly performed in vitro, and are not supported by in vivo evidence. More recently, electroporation of Wnt3a into the neocortex of developing mouse embryos increased both, proliferation of APs and differentiation of BPs, seemingly resolving the controversy and leading the authors to conclude that canonical WNT can indeed promote both processes in a cell type‐specific manner (Munji et al , ). However, this study relied on manipulating WNT signalling at the ligand level, which not only affects WNT/β‐catenin but also WNT/STOP signalling, which was not examined. WNT/STOP acts through the same WNT/LRP6/GSK3 axis but bifurcates upstream‐ and is independent of β‐catenin. Its primary output is not transcription but post‐transcriptional protein stabilization (Acebron & Niehrs, ). We now propose a new model (Fig ), whereby WNT/STOP promotes neuron generation from BPs through stabilization of Sox4/11 while canonical WNT/β‐catenin signalling promotes AP self‐renewal. Our conclusions based on the analysis of E13.5 DKO embryos are further supported by the results of Ccny / l1 knockdown at later stages of neocortex development in vivo and of NPC culture experiments in vitro . Together, these data show reduced generation of BPs, and consequently of neurons, in the absence of Ccny/l1. We therefore conclude that DKO embryos exhibit reduced neurogenesis through decreased generation of BPs from APs and of post‐mitotic neurons from BPs. However, while our data demonstrate that the knockout of Ccny/l1 does not affect WNT/β‐catenin signalling, they do not rule out the possibility that WNT/β‐catenin signalling can also contribute to these processes. LRP6 is a key regulator in WNT/STOP and its phosphorylation at G2/M by the Ccny/CDK complex leads to GSK3 inhibition and activation of the pathway notably during mitosis, when transcription is low (Davidson et al , ). Interestingly, analysis of Lrp6 mutant mice reveals a thinner neocortex, reduced neuronal differentiation, and only minor changes in proliferation (Zhou et al , ). Similarly, deficiency in actin‐binding protein filamin A induces neurogenesis defects within the mouse cerebral cortex and impairs LRP6/GSK3 signalling and asymmetric cell division (Lian et al , ; Lian et al , ). These phenotypes are remarkably similar to those observed here in Ccny / l1 DKO embryos, suggesting that Lrp6 mutants primarily present a WNT/STOP signalling defect. The reason why Lrp6 mutants do not also manifest the proliferation defects of β‐catenin mutants is unclear, but may be due to compensation from LRP5. Functional redundancy between LRP5 and LRP6 in WNT/β‐catenin signalling is well documented (Kelly et al , ; Goel et al , ; Zhong et al , ; Liu et al , ).
An important finding of this study is the identification of the neurogenic transcription factors Sox4 and Sox11 as bona fide WNT/STOP targets. Several links between SoxC transcription factors and WNT signalling are known: Sox4 and Sox11 interact with the β-catenin destruction complex and, in lung cancer cells, Sox4 is a direct target of β-catenin (Bhattaram et al, ; Melnik et al, ). Our data support and further develop these links by showing that Sox4/11 are directly phosphorylated by GSK3 and thereby targeted for proteasomal degradation. We also show that the majority of Sox4 is phosphorylated at a single GSK3 phospho-motif, while Sox11 contains two motifs and is only partially phosphorylated. GSK3 phosphorylation is known to create phosphodegrons for recognition by E3 ubiquitin ligases, leading to proteasomal degradation (Taelman et al, ). Consistent with this possibility, phosphorylation by GSK3β induced ubiquitination and decreased overall Sox4 and Sox11 levels. Previous reports have shown that, in the neocortex, Sox11 cellular localization and protein stabilization are regulated by phosphorylation and ubiquitination, respectively, supporting the notion that post-transcriptional regulation of Sox11 is essential for its function in vivo (Balta et al, ; Balta et al, ; Chiang et al, ). WNT/STOP-mediated inhibition of GSK3 predominantly occurs in G2/M (Acebron et al, ). Consistently, we find that the stabilization of Sox4/11 by WNT/STOP is cell cycle-dependent, with peak levels in mitotic cells. WNT/STOP signalling in G2/M of mother cells is thought to endow daughter cells with a growth advantage by stabilizing bulk protein. Deficiency in WNT/STOP signalling reduces G1 growth and delays cell cycle progression (Acebron et al, ; Huang et al, ). Analogously, WNT/STOP signalling in G2/M of NPCs may promote differentiation of daughter cells via elevated levels of Sox4/11 during G1/S-phase. An attractive hypothesis is that Sox4/11 are bookmarking transcription factors that remain bound to chromosomes in mitosis to enable timely target gene reactivation upon mitotic exit (Palozola et al, ). The importance of decreased Sox4/11 protein levels for the neurogenesis defect of DKO embryos is highlighted by the fact that their overexpression can rescue the differentiation phenotype. Interestingly, the rescue is incomplete: Sox4/11 increase the number of Tuj1+ cells in differentiating DKO NPCs, but only partially restore their morphology, leading to immature neurons. The incomplete rescue may be due to non-physiologically high levels of Sox4/11 from lentiviral transduction, since Sox11 overexpression inhibits dendritic morphogenesis (Hoshiba et al, ). Alternatively, WNT/STOP signalling may have additional targets required for neuronal maturation and morphogenesis. Indeed, proteomic analysis indicates hundreds of potential WNT/STOP target proteins in HeLa cells (Acebron et al, ).
Another important insight of this work is that Ccny/l1 are required to promote asymmetric division of APs, which is in keeping with the physiological role of asymmetric AP division as the starting point of cortical neurogenesis, since more asymmetric AP divisions lead to more BPs and hence post-mitotic neurons. In C. elegans embryos, WNT signalling has long been known to regulate spindle asymmetry (Sugioka et al, ), and in embryonic stem cells, a Wnt3a point source can polarise the mitotic spindle to give rise to asymmetric cell division (Habib et al, ). Taken together, these studies support the notion that WNT/STOP signalling promotes asymmetric AP division. Although we have not identified the molecular targets responsible for the decreased asymmetric AP division in DKO embryos, we narrowed down the mechanism to an increase in apical-basal aMTs, which can directly stabilize the positioning of the mitotic spindle and thus promote symmetric AP division (Mora-Bermúdez et al, ). This finding is consistent with recent work in which GSK3 overexpression and inhibition of WNT secretion in HCT116 cells increase MT polymerization rates during mitosis (Lin et al, ). Interestingly, WNT/STOP inhibition has also been shown to induce mitotic spindle defects and improper chromosome segregation (Stolz et al, ; Lin et al, ), pointing towards a more general role of WNT/STOP signalling in promoting mitotic spindle assembly. One possibility is that WNT/STOP signalling modulates MT plus ends, which are key determinants of spindle orientation (Lu & Johnston, ). Misregulation of MT plus ends may also explain the lengthened mitosis observed in DKO BPs. A candidate WNT/STOP target protein is Kif2a (Taelman et al, ), a MT plus end regulator that is required for cortical neurogenesis (Sun et al, ; Ding et al, ). On the other hand, we cannot exclude cross-talk between WNT/STOP and the WNT/PCP pathway, which is also implicated in asymmetric division of cortical progenitors (Delaunay et al, ).
Our study suggests that β-catenin-independent WNT signalling impacts neurogenesis preferentially during mitosis: (i) LRP6 is maximally phosphorylated and activated in mitotic NPCs, (ii) Sox4/11 protein levels peak in mitotic NPCs, (iii) apical-basal aMTs and symmetric cell division are both increased in DKO APs, and (iv) mitosis is prolonged in BPs. These observations are consistent with the facts that WNT/STOP signalling stabilizes proteins specifically in mitosis (Acebron et al, ) and regulates cell cycle progression of mitotic cells (Huang et al, ; Stolz et al, ). The process of neurogenesis is intricately connected to the cell cycle length of NPCs (Götz & Huttner, ; Dehay & Kennedy, ; Borrell & Calegari, ). Thus, lengthening the cell cycle of APs induces premature neurogenesis (Calegari & Huttner, ). Conversely, shortening the cell cycle of APs by reducing G1 delays neurogenesis and promotes generation and expansion of BPs (Lange et al, ). It is unclear at which point in the cell cycle NPCs commit to neuronal differentiation. However, given that Ccny/l1 deficiency reduces neuronal output and that the regulation of neurogenesis by WNT/STOP occurs primarily during cell division, we conclude that mitosis represents a key phase of the cell cycle in which NPCs commit to a neurogenic fate. This conclusion echoes the observation that many human microcephaly-associated genes encode mitotic regulators (Hu et al, ). In summary, our findings resolve the controversy regarding the role of WNT signalling in neocortex development by demonstrating that WNT/STOP is the primary driver of a differentiative process in vivo, i.e. the generation of neurons from NPCs, whereas WNT/β-catenin promotes NPC self-renewal. They also emphasize the importance of mitosis as a critical determinant of NPC fate, when asymmetric AP division and Sox4/11 protein stabilization in BPs are orchestrated by post-transcriptional WNT signalling.
Materials and Methods

Animals

Mice were bred in the Central Animal Laboratory of the DKFZ, Heidelberg, under standardized hygienic conditions. All animal work was conducted according to national and international guidelines and was approved by the state review board of Baden-Württemberg (protocol no. G-123/18). Sperm from mice carrying a loxP-flanked (floxed) allele of cyclin Y (Ccny tm1(flox)Smoc) was obtained from the lab of Arial Zeng (Shanghai, China; An et al, ) and used for in vitro fertilization of wild-type C57BL/6N oocytes. Heterozygous Ccny-flox mice were bred with transgenic animals expressing Cre recombinase under the control of the CMV promoter to achieve organism-wide gene knockout (Ccny KO). Generation of cyclin Y-like 1-deficient (Ccnyl1tm1a(EUCOMM)Wtsi/H; Ccnyl1 KO) mice has been described previously (Koch et al, ). Ccny and Ccnyl1 double knockout (DKO) embryos were generated by incrossing Ccny−/− Ccnyl1+/− males with Ccny+/− Ccnyl1−/− females. Ccny+/−; Ccnyl1+/− embryos were used as controls. Embryos were analysed at various time points (E11.5, E12.5 and E13.5), and sex was not taken into consideration. For proliferation assays, BrdU (Sigma-Aldrich, B5002) and IdU (Sigma-Aldrich, I0050000) were dissolved in 0.9% NaCl and administered to pregnant dams via intraperitoneal (IP) injection at a dose of 50 mg/kg. Injection strategies are indicated in the figure legends. Adult mice were sacrificed by cervical dislocation. For in utero electroporation experiments, C57BL/6J mice were bred at the Biomedical Services Facility of the MPI-CBG under standardized hygienic conditions, and all procedures were conducted in agreement with the German Animal Welfare Legislation after approval by the Landesdirektion Sachsen (licenses: mouse TVV 5/2015 and TVV 20/2020).
In utero electroporation

Wild-type C57BL/6J pregnant mice carrying E13.5 embryos were anesthetized using initially 4% isoflurane (Baxter, HDG9623), followed by 2–3% isoflurane during the in utero electroporation (IUE) procedure. The animals were injected subcutaneously with the analgesic metamizol (0.1 ml, 200 mg/kg). The peritoneal cavity was surgically opened and the uterus exposed. Using a borosilicate microcapillary (Sutter Instruments, BF120-69-10), the embryos were injected intraventricularly with a solution containing 0.1% Fast Green (Sigma-Aldrich, F7252) in sterile PBS, 2 µg/µl of pSuper plasmid (either 2 µg/µl of pSuper-shcon, or 1 µg/µl of pSuper-shCcny plus 1 µg/µl of pSuper-shCcnyl1) and 0.4 µg/µl of pCAGGS-GFP. The electroporations (six 50-ms pulses of 28 V at 1-s intervals) were performed using a 3-mm diameter electrode (BTX Genetronics Inc., 45-0052INT). After surgery, mice received metamizol in drinking water (1.33 mg/ml). Pregnant mice were sacrificed by cervical dislocation at the indicated time points (E15.5–E17.5), and embryonic brains were dissected, fixed in 4% PFA overnight at 4°C and processed for cryo-sectioning.
Cell culture

HEK293T (293T) cells were cultured in DMEM (Gibco, 11960-044) supplemented with 10% v/v foetal bovine serum (Capricorn, FBS-12A9), 2 mM glutamine (Sigma-Aldrich, G7513) and 1% v/v penicillin/streptomycin (Sigma-Aldrich, P0781). Cells were grown at 37°C and 10% CO2 in a humidified chamber. Transfections were performed using XtremeGENE 9 DNA transfection reagent (Roche, 06366244001) according to the manufacturer's instructions. Plasmids were transfected at either 200 ng/ml (Flag-Sox4/Sox11, GFP, HA-Ubiquitin) or 400 ng/ml (Myc-GSK3β) in 24-well plates coated with poly-D-lysine (Sigma-Aldrich, P6407). Where indicated, cells were treated with bromoindirubin-3'-oxime (BIO) (Cayman Chemical, 13123) or nocodazole (BioTrend, BN0389). Doses and durations of treatments are indicated in the figure legends. DMSO was used as a control for BIO and nocodazole treatments.
NPC isolation and culturing

NPCs were obtained by incrossing Ccny+/− Ccnyl1+/− animals and collecting embryos at E13.5. The forebrain cerebral cortex was dissected from individual embryos, dissociated into a single-cell suspension by repetitive pipetting and then filtered through a 70-µm cell strainer (Corning, 431751). The resulting neurospheres were cultured in NPC media (DMEM/F12 (Invitrogen, p5780), B27 (LIFE, 1074547), glucose (Sigma-Aldrich, s5761), HEPES, progesterone (Sigma-Aldrich, 12587010), putrescine (Sigma-Aldrich, p7556), heparin (Sigma-Aldrich, E4127), penicillin/streptomycin, insulin–transferrin–sodium selenite supplements (Roche, H3149), sodium bicarbonate (Sigma-Aldrich, p5780) and 20 ng/ml EGF (Thermo Fisher, PHG0313)) for 5–7 days before being passaged. For passaging, cells were treated with accutase (Capricorn, ACC-1B) and then pipetted repetitively to obtain a single-cell suspension. Cells were passaged at least two times before performing experiments. Where indicated, NPCs were treated with BIO, CHIR99021 (Millipore, 361559), or nocodazole. Doses and durations of treatments are indicated in the figure legends. For NPC differentiation assays, 2 × 10^5 cells were plated on poly-D-lysine-coated 24-well plates for 48 h, and media was then switched to NPC differentiation media (DMEM/F12, B27, glucose, HEPES, progesterone, putrescine, heparin, penicillin/streptomycin, insulin–transferrin–sodium selenite supplements, and sodium bicarbonate). Cells were grown for 7–9 days to a confluency of ~80% before being harvested for RNA, protein or IF analysis. Media was replaced every 48 h. Knockdown of Ccny was performed by transducing shCcny into NPCs 24 h prior to differentiation assays. For Sox4/11 rescue experiments, Flag-Sox4- and Flag-Sox11-overexpressing lentiviruses were transduced simultaneously into NPCs 24 h prior to differentiation assays. 4 µg/ml polybrene (Sigma-Aldrich, TR-1003) was added to media for all transductions. For rescue experiments by GSK3 inhibition, CHIR99021 was added to the differentiation media at the indicated concentrations for the entire duration of the differentiation protocol. DMSO was used as a control. For FACS analysis, NPCs were fixed in 70% ethanol for 10 min, washed three times with PBS and then stained with 40 µg/ml propidium iodide (Thermo Fisher, BMS500PI) in FACS staining buffer (0.1% Triton X-100, 0.1% sodium citrate, in PBS) at 37°C for 30 min. Cells were then analysed according to Davidson et al, on a BD FACS Canto, and data were processed using FlowJo software.
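Downstream of this staining protocol, propidium iodide histograms are typically converted into cell-cycle phase fractions. The short Python sketch below illustrates a crude three-gate classification from per-cell PI intensities; the gate multipliers and the assumption of user-supplied G1 and G2/M peak positions are illustrative choices, not the FlowJo settings actually used in this study.

    import numpy as np

    def cell_cycle_fractions(pi, g1_peak, g2m_peak):
        # Crude gating of PI fluorescence (DNA content) into G1, S and G2/M.
        # g1_peak and g2m_peak are assumed to be read off the histogram.
        g1_upper = 1.25 * g1_peak    # upper bound of the G1 gate (assumption)
        g2m_lower = 0.85 * g2m_peak  # lower bound of the G2/M gate (assumption)
        pi = np.asarray(pi, dtype=float)
        return {"G1": float(np.mean(pi < g1_upper)),
                "S": float(np.mean((pi >= g1_upper) & (pi <= g2m_lower))),
                "G2/M": float(np.mean(pi > g2m_lower))}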
Lentivirus preparation

The Ccny and Ccnyl1 shRNA plasmids were obtained from the laboratory of Arial Zeng and are described in Zeng et al, . For the Sox4 and Sox11 overexpression plasmids, full-length Flag-Sox4 and Flag-Sox11 were excised from their respective pCS2+ plasmids (see below) and each ligated into the pLenti-CAG-IRES-EGFP plasmid using BamHI and BsrGI restriction sites to produce two separate lentiviruses. EGFP was removed with this cloning strategy for both the Sox4- and Sox11-overexpressing lentiviruses. All lentiviruses were packaged in 293T cells according to Lois et al, .
Sox4/Sox11 cloning and mutagenesis

GoTaq (Promega, M7841) DNA polymerase was used to amplify full-length Sox4 and Sox11 from cDNA obtained from whole E13.5 embryos. N-terminal Flag tags were added via the forward primers (see Appendix Table ). BamHI and XhoI restriction sites were introduced by PCR for cloning into the pCS2+ plasmid, and sequencing was performed to verify the integrity of the constructs. Mutagenesis of the Sox4/11 GSK3 phospho-motifs was performed by amplifying the full-length Flag-tagged plasmids with Phusion DNA polymerase (NEB, M0530L) using primers designed to mutate selected serines to alanines. Extension times of 5 min and 18 cycles were used for each PCR. Amplification products were PCR-purified (Macherey-Nagel, 740609.250), digested with DpnI restriction enzyme (NEB, R0176) for 1 h at 37°C, PCR-purified again, and then transformed into electro-competent bacteria. Colonies were screened for mutations by sequencing. Double GSK3-motif mutations (e.g. for Sox11) were introduced sequentially.
Real-time quantitative PCR

mRNA from NPCs and embryonic dorsal forebrains was extracted using the NucleoSpin RNA XS kit (Macherey-Nagel, 740902.50) and the NucleoSpin RNA kit (Macherey-Nagel, 740955.250), respectively, according to the manufacturer's instructions. Extracted mRNA was reverse-transcribed to cDNA using random hexamer primers. PCR was performed on a Roche LightCycler 480 using the Universal ProbeLibrary system. Gapdh was used as a housekeeping gene unless otherwise stated. See Appendix Table for primer sequences.
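For readers reproducing the expression analyses computationally: with Gapdh as the housekeeping gene, relative expression is conventionally computed by the 2^(−ΔΔCt) method. The Python sketch below illustrates that calculation; the tabular layout (columns 'sample', 'group', 'gene', 'Ct') is an assumption for illustration and is not specified in the original protocol.

    import pandas as pd

    def ddct_fold_change(ct, target, reference="Gapdh", control_group="control"):
        # Relative expression by the 2^(-ddCt) method.
        # ct: long-format DataFrame with columns 'sample', 'group', 'gene', 'Ct'
        # (illustrative layout, not from the original paper).
        wide = ct.pivot_table(index=["sample", "group"],
                              columns="gene", values="Ct").reset_index()
        # dCt: normalize the target Ct to the housekeeping gene per sample
        wide["dCt"] = wide[target] - wide[reference]
        # ddCt: subtract the mean dCt of the control group
        baseline = wide.loc[wide["group"] == control_group, "dCt"].mean()
        wide["fold_change"] = 2.0 ** (-(wide["dCt"] - baseline))
        return wide[["sample", "group", "fold_change"]]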
Immunoblotting

Cells were adjusted to equal numbers, washed with PBS, resuspended in triton lysis buffer (20 mM Tris–HCl, 150 mM NaCl, 1% Triton X-100, 1 mM EDTA, 1 mM EGTA, 1 mM β-glycerophosphate, 2.5 mM sodium pyrophosphate and 1 mM sodium orthovanadate), incubated for 30 min on ice and then spun down at full speed for 7 min to clear lysates. For brain tissue samples, forebrains were dissected, resuspended in triton lysis buffer and sonicated in a water bath for 15 min before being processed as described above. Lysates were heated at 70°C in NuPAGE LDS buffer (Thermo Fisher, NP0007) with 50 mM DTT. Samples were separated on 7.5% polyacrylamide gels, transferred to nitrocellulose and blocked with 5% skim-milk powder or 5% BSA in Tris-buffered saline with 0.05% Tween-20 (TBST) for 1 h at room temperature. Primary antibodies were diluted in blocking buffer and incubated overnight at 4°C. After three washes in TBST, membranes were incubated with peroxidase-linked secondary antibodies for 1 h at room temperature. Following an additional three washes, membranes were treated with SuperSignal West Pico solution (Thermo Scientific, 34579). Images were acquired on an LAS-3000 system (Fujifilm).
Lambda-phosphatase treatment

Cells were lysed in modified RIPA lysis buffer (50 mM HEPES pH 8.0, 300 mM NaCl, 1% Triton X-100, 0.2% sodium deoxycholate, 0.05% SDS, 5 mM MgCl2) supplemented with an EDTA-free protease inhibitor tablet (Pierce, A32965) for 30 min on ice, spun down at full speed at 4°C for 5 min, and supernatants were collected. Lysates were incubated with lambda phosphatase (NEB, P0753S) for 30 or 60 min (indicated in figure legends) at 30°C according to the manufacturer's instructions.
Ubiquitination assays

HA-tagged ubiquitin plasmid was co-transfected with Flag-tagged Sox4/Sox11 or pCS2+ empty vector into 293T cells in 6-cm dishes for 48 h. Cells were treated for 4 h with 20 µM MG132 (Sigma-Aldrich, C2211) before being harvested in triton lysis buffer. A total of 200 µg protein was incubated with 20 µl FLAG beads (Sigma-Aldrich, A2220) overnight at 4°C with rotation. Beads were washed four times with triton lysis buffer, resuspended in 20 mM Tris–HCl pH 7.5 buffer containing 0.1% SDS and heated for 5 min at 95°C to dissociate Sox binding partners. Samples were spun down, resuspended in triton lysis buffer and incubated a second time with 20 µl FLAG beads overnight at 4°C with rotation. Following four additional washes, NuPAGE LDS buffer supplemented with 50 mM DTT was added to the beads, and samples were heated at 70°C for 10 min.
In vitro kinase assays

Ten micrograms of Sox4/Sox11 wild-type and mutant plasmids were transfected into 293T cells in 10-cm dishes for 48 h. Cells were lysed in 2 ml modified RIPA lysis buffer and incubated with 20 µl FLAG beads overnight at 4°C with rotation. Beads were then washed once with modified RIPA buffer, followed by two washes with high-salt wash buffer (30 mM HEPES pH 8.0, 500 mM NaCl, 5 mM MgCl2 supplemented with EDTA-free protease inhibitor tablet) and one wash with low-salt wash buffer (30 mM HEPES pH 8.0, 500 mM NaCl, 5 mM MgCl2 supplemented with EDTA-free protease inhibitor tablet). Beads were then resuspended in low-salt wash buffer and treated with lambda phosphatase as described above. For the kinase assay, beads were washed once in wash buffer (30 mM HEPES-KOH pH 7.7, 10 mM MgCl2, 0.2 mM β-mercaptoethanol) and divided equally into two 1.5-ml tubes. 10 nM GSK3β (Millipore, 14-306) or reaction buffer was added to the corresponding tube, and the reaction was started by adding 50 µM ATP containing 1 µCi of radiolabeled [γ-32P]ATP (PerkinElmer, NEG502A001MC). Phosphorylation assays were performed at 37°C for 15 min in 30 µl of kinase buffer (30 mM HEPES-KOH pH 7.7, 10 mM MgCl2, 1 mM DTT, 0.2% BRIJ-35). Reactions were stopped by directly adding 10 µl of 4× SDS Laemmli buffer, and samples were heated at 99°C for 7 min. 15 µl was loaded onto a 10% SDS–PAGE gel and a 7.5% Phos-Tag gel (Alpha Laboratories, 304-93526), followed by staining with Quick Coomassie (Protein Ark, GEN-QC-STAIN-1L) and imaging of the dried gel with a phosphorimager (Sapphire Biomolecular Imager, Azure Biosystems).
Antibodies

Rabbit polyclonal antibodies against Ccny and Ccnyl1 were raised against synthetic peptides and affinity-purified as previously described (Davidson et al, ; Koch et al, ). No cross-reactivity with Ccnyl1 was detected for the anti-Ccny antibody and vice versa. The rabbit polyclonal pLRP6 T1479 antibody is described in Davidson et al ( ). The Sox4 and Sox11 polyclonal guinea pig antibodies are described in Hoser et al, . The Sox4-phospho polyclonal rabbit antibody is described below. All other antibodies used in this study are commercial and are listed in the Appendix Table .
Sox4 phospho-antibody preparation

Rabbits were injected bi-weekly with AASpPAAGRC peptide conjugated to Imject Maleimide-Activated Blue Carrier Protein (Thermo Fisher, 77664), and serum was collected after 4 months. The antibody was purified by first passing the serum over phosphopeptide immobilized on SulfoLink beads (Thermo Fisher, 20401) and then subtracting it against the non-phosphorylated peptide. Serum from three rabbits was collected and purified, and the serum with the highest antibody yield was used for subsequent experiments. Rabbit housing, injections and serum collection were performed by Pineda Antikörper-Service (Berlin, Germany).
Immunofluorescence

Paraffin

Tissues were fixed overnight in 4% paraformaldehyde at 4°C, progressively dehydrated and embedded in paraffin. 7-µm-thick sections were rehydrated, boiled in a pressure cooker for 2 min with citrate/EDTA buffer (10 mM sodium citrate, 5 mM Tris–HCl, 2 mM EDTA, pH 8.0) and blocked in blocking buffer (PBS containing 10% normal donkey serum, 1% BSA and 0.1% Triton X-100) for 30 min at room temperature. All primary antibodies were diluted in blocking buffer and applied overnight at 4°C. Secondary antibodies were diluted 1:500 in blocking buffer containing Hoechst 33258 dye (1:1,000) (Sigma-Aldrich, 861405) to stain DNA and applied at room temperature for 1 h. For histological analysis, 7-µm-thick sections were stained with haematoxylin and eosin according to standard procedures.
Frozen sections

Tissues were fixed for 2 h in 4% paraformaldehyde at 4°C, incubated in 30% sucrose overnight, embedded in Tissue-Tek (OCT, Sakura, 4583) and frozen at −20°C. Sections of 8 µm thickness were washed briefly in PBS and then heated in a microwave for 5 min in sodium citrate buffer (10 mM sodium citrate, 0.05% Tween, pH 6.0). Blocking and antibody applications were performed as described above.
Cell culture

Cells were cultured on coverslips coated with poly-D-lysine and fixed with 4% PFA at room temperature for 10 min. Following fixation, cells were washed twice in PBS and then blocked and stained with the indicated antibodies as described above.
TUNEL staining

TUNEL staining was performed with the Click-iT Plus TUNEL assay (Thermo Fisher, C10617) according to the manufacturer's instructions.
RNAScope in situ hybridization

Paraffin-embedded forebrain sections were processed for RNA in situ hybridization using the RNAScope 2.5 HD assay-RED kit (Advanced Cell Diagnostics, 322360) (chromogenic and fluorogenic) according to the manufacturer's instructions. The RNAscope probe used was Axin2 (NM_015732, region 330–1287).
Quantification and statistical analysis

Sample sizes (individual embryos, litter numbers and wells (in vitro experiments)) are reported in each figure legend. All cell counts were performed in standardized microscopic fields (additional information in ) using either the Fiji Cell Counter plug-in (quantifications done blindly) or user-defined macros (no blinding for quantification). All statistical analyses were conducted using GraphPad Prism. Data normality was tested by the Shapiro–Wilk normality test, and variances between groups were tested using the F-test. Means between two groups were compared using a two-tailed unpaired Student's t-test, and means between multiple groups were compared using one-way or two-way analysis of variance (ANOVA) followed by Tukey's multiple comparison tests. Statistical outliers were identified using Grubbs' test. Results are displayed as arithmetic mean ± standard error of the mean (SEM). Where indicated, results are shown as fold change vs. controls. Statistically significant data are indicated as: *P < 0.05, **P < 0.01, and ***P < 0.001. Non-significant data are indicated as ns.
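The decision flow described above (Shapiro–Wilk normality testing, an unpaired two-tailed t-test for two groups, and one-way ANOVA followed by Tukey's post hoc test for multiple groups) can be mirrored outside Prism. The following is a minimal Python sketch using SciPy and statsmodels; it omits the variance F-test and Grubbs' outlier test for brevity, and the example data are synthetic.

    import numpy as np
    from scipy import stats
    from statsmodels.stats.multicomp import pairwise_tukeyhsd

    def compare_groups(groups):
        # groups: dict mapping group name -> 1D array of measurements
        names, data = list(groups), list(groups.values())
        for name, values in groups.items():
            w, p = stats.shapiro(values)              # Shapiro-Wilk normality test
            print(f"Shapiro-Wilk {name}: W={w:.3f}, p={p:.3f}")
        if len(data) == 2:
            t, p = stats.ttest_ind(data[0], data[1])  # two-tailed unpaired t-test
            print(f"t-test: t={t:.3f}, p={p:.4f}")
        else:
            f, p = stats.f_oneway(*data)              # one-way ANOVA
            print(f"ANOVA: F={f:.3f}, p={p:.4f}")
            values = np.concatenate(data)
            labels = np.concatenate([[n] * len(d) for n, d in zip(names, data)])
            print(pairwise_tukeyhsd(values, labels))  # Tukey's post hoc test

    rng = np.random.default_rng(0)
    compare_groups({"control": rng.normal(1.0, 0.1, 6),
                    "DKO": rng.normal(0.7, 0.1, 6)})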
The term "self‐renewal" has been commonly used not only for asymmetric NPC divisions that result in the maintenance of the pool size of a given neural progenitor type (e.g. 1 AP − > 1 AP + 1 BP), but also for symmetric divisions of NPCs that increase the pool size of a given NPC type (e.g. 1 AP − > 2 APs, 1 BP − > 2 BPs). To clarify matters, we refer to the latter type of cell division as "increased self‐renewal".
Author contributions

FDS and KZ conceived, performed and analysed experiments. JH supervised animal husbandry and assisted with experiments. EF performed the kinase assays. AP performed IUE experiments and cortical layer thickness measurements. MWB performed astral microtubule quantification. VV performed the RNAScope experiments, and AS helped with manuscript revision. WBH contributed by planning experiments and revising the manuscript. CN supervised all aspects of the project. FDS wrote the manuscript with input from all authors.
Conflict of interest

The authors declare that they have no conflict of interest.
Supporting information available online: Appendix; Expanded View Figures PDF; Source Data for Expanded View/Appendix; Source Data for Figures 1–6.
Comprehensive understanding of context-specific functions of PHF2 in lipid metabolic tissues | f78274a2-02e2-4caa-8d08-9188de055df4 | 11914216 | Digestive System[mh] | Lipid metabolism includes the storage, synthesis, and degradation of lipids to support cellular energy production . Among many tissues, adipose tissues and liver act as a control tower of lipid metabolism . Adipocytes serve as the primary energy storage by accumulating large amounts of fat in specialized lipid droplets . When energy levels in the body are low, adipocytes break down stored lipids into fatty acids and glycerol, which are then released into the bloodstream for use by other tissues . The circulating lipids are transported into the hepatocytes as free fatty acids. In the liver, free fatty acids can either undergo oxidation or esterification to form triglycerides (TG). Some of these TGs are utilized for the synthesis of protein components such as very-low-density lipoprotein (VLDL) and chylomicrons, while excess TGs are stored in the liver in the form of lipid droplets . In addition, the liver synthesizes fatty acids, primarily through two transcription factors, sterol regulatory element-binding protein 1c (SREBP1c) and carbohydrate-responsive element-binding protein (ChREBP) . When there is a disruption in lipid metabolism within metabolic tissues, it can lead to an accumulation of excess lipids, which can cause various metabolic disorders such as type 2 diabetes mellitus, obesity, non-alcoholic fatty liver disease (NAFLD), steatohepatitis, and liver cancer . Therefore, it is crucial to identify the key factors that regulate lipid metabolism across different tissues. PHF2, a member of the KDM7 histone demethylase family, demethylates di- or tri-methylated lysine 9 residue in histone 3 (H3K9) and relieves gene silencing via its Jumonji C domain-dependent demethylase activity . Recent research has shown that PHF2 plays an important role in lipid metabolism. During the process of adipogenesis, PHF2 is recruited along with CCAAT-enhancer-binding protein alpha and delta (C/EBPα and C/EBPδ) to the promoter regions of their target genes such as Pparg, Cebpa, and Fabp4 . This recruitment facilitates adipocyte differentiation by demethylating H3K9me2 near C/EBPα-binding regions , . PHF2 plays different roles in lipid metabolism in hepatocytes during various stages of liver disease. In the normal liver, PHF2 promotes fatty acid uptake and de novo lipogenesis rates by coactivating ChREBP, which leads to hepatic steatosis , . However, PHF2 plays a protective role in the liver by activating NF-E2-related factor 2 (NRF2), which helps to prevent steatohepatitis . In liver cancer cells, PHF2 decisively acts as an E3 ubiquitin ligase for SREBP1c , a crucial transcription factor that regulates lipogenic genes. This action effectively suppresses de novo lipogenesis. Given that the impact of PHF2 on lipid metabolism differs between tissues and disease progression, a comprehensive understanding of the role of PHF2 is necessary. However, previous studies primarily examined PHF2 in isolated tissues and there have been experimental variations between different datasets. In this study, we comprehensively investigate the role of PHF2 in the genetic contribution to lipid metabolism in various tissues. We utilize cDNA-chip microarray data, publicly available clinical and proteomics datasets, as well as in vitro-based analyses to evaluate how PHF2 affects the regulatory machinery in different tissues. 
Through comparative analyses, we found that PHF2 plays an important role in enhancing adipogenesis or lipid storage in adipose tissue, as well as in early-stage liver disease by exerting epigenetic functions. Importantly, the function of PHF2 changes in late-stage liver disease. In this stage, PHF2 decreases lipid storage, lipid synthesis, the immune pathway, and disease progression. Therefore, we suggest that PHF2 could serve as a molecular checkpoint for metabolic disease stages. Our research findings may be useful in guiding the development of future treatments for metabolic disorders.
A comprehensive investigation of the roles of PHF2 during adipogenesis

To investigate the role of PHF2 in adipocyte differentiation, we conducted a gene profiling analysis using a cDNA-chip microarray on 3T3-L1 cells. 3T3-L1 cells stably expressing sh-Control or sh-Phf2 were cultured with adipogenic stimuli. The chip detected a total of 31,939 genes, and the scatter and MA plots revealed significant changes in the pattern of gene expression between the selected groups (Supplementary Fig. A,B). After adipogenesis, PHF2 knockdown differentially increased 248 annotated genes and decreased 347 genes compared to sh-Control (p < 0.05) (Supplementary Fig. C). To identify the signaling pathways involved, we conducted Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) enrichment analyses for the downregulated genes. According to the GO analysis of biological processes, PHF2 knockdown suppressed several pathways, including transcription and gene expression; differentiation of fat cells, erythrocytes, osteoblasts and embryonic placenta; neuron development; and cell cycle pathways (Fig. A, left). Additionally, according to the KEGG pathway analysis, PHF2 knockdown reduced the AMPK, PPAR, and Rap1 signaling pathways, hormone regulation, Alzheimer's and Parkinson's diseases, non-alcoholic fatty liver disease, and neurodegeneration (Fig. A, right). To better understand the functions of PHF2, we further analyzed the 347 downregulated differentially expressed genes (DEGs) using network analysis in the ClueGO application. PHF2 knockdown significantly suppressed the following pathways: myeloid, enucleate erythrocyte, and osteoblast differentiation; development of adipose tissue; Alzheimer's disease; non-alcoholic fatty liver disease; the lipid metabolic pathway; the PPAR, Rap1, AMPK, and adrenergic pathways; cell proliferation and the mitotic cell cycle; the DNA biosynthetic process; transcription activity; and protein phosphorylation (Fig. B). These results suggest that PHF2 is positively associated with cell development, differentiation, hormonal regulation, and lipid metabolism during adipogenesis. Next, we conducted a comparative analysis using the 248 genes increased in the sh-Phf2 group compared with the sh-Control group (p < 0.05). These upregulated genes were analyzed using GO and KEGG enrichment techniques. Pathways related to tissue morphogenesis and development, DNA-templated transcription, immune signaling and response to viruses, somite specification, lipid signaling, cell migration, motility, and differentiation, and nervous system development were increased in the sh-Phf2 group compared to the sh-Control group (Supplementary Fig. A, left). Furthermore, the KEGG pathway analysis indicated an increase in genes related to focal adhesion, pathways in cancer and glioma, influenza A, NOD-like receptor, cytosolic DNA-sensing, PI3K-AKT, Rap1, and JAK-STAT signaling pathways, hepatitis B and C, and transcriptional misregulation in the sh-Phf2 group (Supplementary Fig. A, right). Based on the network analysis conducted using the ClueGO application, sh-Phf2 cells showed high expression of genes in various processes such as DNA-binding transcription, endochondral ossification, synaptic vesicle localization, tube morphogenesis, focal adhesion, and pattern specification (Supplementary Fig. B).
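As a computational aside, the DEG selection described above (per-gene comparison of sh-Phf2 versus sh-Control arrays at p < 0.05, split by direction of change) can be sketched as follows. This simplified Python illustration uses a per-gene Welch t-test on log2 intensities; the upstream microarray normalization and annotation steps are not reproduced, and the variable names are assumptions.

    import pandas as pd
    from scipy import stats

    def split_degs(expr, ctrl_cols, kd_cols, alpha=0.05):
        # expr: genes x samples DataFrame of log2 intensities (illustrative).
        # Returns (upregulated, downregulated) gene lists for sh-Phf2 vs sh-Control.
        t, p = stats.ttest_ind(expr[kd_cols], expr[ctrl_cols],
                               axis=1, equal_var=False)  # per-gene Welch t-test
        log2fc = expr[kd_cols].mean(axis=1) - expr[ctrl_cols].mean(axis=1)
        res = pd.DataFrame({"log2FC": log2fc, "p": p}, index=expr.index)
        sig = res[res["p"] < alpha]
        up = sig.index[sig["log2FC"] > 0].tolist()    # 248 genes in this study
        down = sig.index[sig["log2FC"] < 0].tolist()  # 347 genes in this study
        return up, down  # these lists feed GO/KEGG enrichment and ClueGO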
Together, these analyses suggest that PHF2 knockdown promotes immune pathways related to the response to infection and inflammation; cell structure processes such as morphogenesis, focal adhesion, and endochondral ossification; and cancer progression processes such as migration, proliferation, PI3K-AKT signaling, and transcriptional misregulation in adipocytes. Supplementary Fig. C-L presents the top five KEGG pathway maps from the low- or high-expressed DEGs in sh-Phf2 cells.

Clinical roles of PHF2 in patients with obesity, NAFLD, NASH, and liver cancer

To validate the biological pathways associated with PHF2 expression, we evaluated the contribution of PHF2 in clinical samples. All publicly available NCBI GEO datasets were divided into PHF2-low and PHF2-high expression groups, and gene set enrichment analyses (GSEA) were performed. Results from normal-weight and obese individuals (GSE55205, n = 23) revealed that the hallmark of histone 3 lysine 9 demethylase activity, the originally known function of PHF2, was closely correlated with PHF2 expression levels (Fig. A). In terms of lipid metabolism, PHF2 is positively involved in lipid droplet organization and adipogenesis (Fig. A). It has been suggested that adipocytes might activate gamma delta T cells and evade immune responses via lipid antigen presentation. Thus, we examined the crosstalk between PHF2 and the immune response in this dataset (GSE55205). Interestingly, PHF2 expression is positively associated with gamma delta T cell activation (Supplementary Fig. A). However, the other immune pathways, including adaptive immune response, B and T cell activation, the T helper cell type 1 and 2 pathways, and natural killer cell-mediated cytotoxicity, showed no significant association with PHF2 (Supplementary Fig. A). Because the liver is one of the most important tissues involved in lipid metabolism, we asked whether PHF2 is also associated with the hallmarks of metabolic genes in patients with liver diseases. During liver pathogenesis, PHF2 mRNA levels showed a gradual decline along the following disease gradient: healthy control → steatosis → non-alcoholic steatohepatitis (NASH) and cirrhosis → hepatocellular carcinoma (HCC) (Supplementary Fig. B). Genes from patients with NAFLD and NASH (GSE89632, n = 63) showed that PHF2 is positively involved in the histone methylase complex, lipid storage, and adipogenesis (Fig. B). Next, we analyzed genes from patients with severe liver damage such as cirrhosis and HCC (GSE54238, n = 56). Unexpectedly, the correlations between PHF2 and genes involved in H3K9 methylation, the histone methylase complex, and liver cancer with H3K9me3 were comparable (Supplementary Fig. C). However, when we analyzed genes from patients with advanced HCC only, the genes associated with the lipoprotein biosynthetic process, the lipoprotein metabolic process, and the lipid metabolism pathway showed negative correlations with PHF2 expression (GSE54238, n = 13) (Fig. C). In this dataset, the hallmarks of the immune effector process, cell activation involved in the immune response, and the chemokine signaling pathway were highly enriched in the PHF2-high expression group (Supplementary Fig. D). These results imply that PHF2 might have different functions across tissues and stages of disease progression. In terms of oncogenic pathways, PHF2 expression showed a positive correlation with genes involved in TGF beta signaling and the p53 pathway, which exhibit tumor-suppressive effects (Supplementary Fig. E).
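The GSEA comparisons above rest on stratifying each cohort into PHF2-low and PHF2-high groups and ranking genes between them. A minimal Python sketch of that preprocessing is given below; the median-split criterion and the signal-to-noise ranking metric are common GSEA conventions adopted here as assumptions, since the exact parameters are not stated in the text.

    import pandas as pd

    def phf2_ranked_list(expr):
        # expr: genes x patients DataFrame of expression values.
        # Median-split the cohort on PHF2 expression, then rank all genes by
        # the signal-to-noise metric between PHF2-high and PHF2-low patients.
        phf2 = expr.loc["PHF2"]
        high = expr.loc[:, phf2 >= phf2.median()]
        low = expr.loc[:, phf2 < phf2.median()]
        s2n = ((high.mean(axis=1) - low.mean(axis=1)) /
               (high.std(axis=1) + low.std(axis=1)))
        # The sorted list can be exported as a .rnk file for preranked GSEA.
        return s2n.sort_values(ascending=False)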
Consistent with a tumor-suppressive profile, PHF2 expression also showed a positive correlation with genes downregulated in metastasis, indicating that PHF2 acts as a tumor suppressor in patients with liver cancer (Supplementary Fig. E).

Functional investigation of PHF2-interacting proteins in liver cancer cells

In order to investigate the role of PHF2 in severe liver disease, we analyzed publicly available mass spectrometry proteomics data (PRIDE repository #PXD044277) and observed 295 PHF2-interacting proteins. These proteins were identified through co-purification between two groups in the dataset, using immunoprecipitation with Flag or SA beads (Fig. A). Further analysis of protein–protein interaction networks revealed that PHF2 physically interacts with proteins related to several important biological processes such as the lipid metabolic process (FDR = 0.001), regulation of gene transcription (FDR = 2.7 × 10^−10), cell cycle (FDR = 2.3 × 10^−4), HCC markers (FDR = 0.002), and the immune system (FDR = 6.5 × 10^−59) (Fig. B). These findings suggest that PHF2 may play a role in regulating liver cancer progression.

The suppressive role of PHF2 in immune cell infiltration in HCC tissues

Infiltration of immune cells into tumors is linked to better patient survival and predicts the effectiveness of immune therapies. In a group of 371 patients with HCC, PHF2 was found to have positive correlations with the infiltration of various immune cells, including B cells (p = 6.4 × 10^−5), CD4+ T cells (p = 3 × 10^−10), CD8+ T cells (p = 7.4 × 10^−4), natural killer cells (p = 1.6 × 10^−3), myeloid dendritic cells (p = 5.7 × 10^−10), macrophages (p = 2.6 × 10^−11), neutrophils (p = 5 × 10^−11), T follicular helper cells (p = 2.7 × 10^−3), and mast cells (p = 9.9 × 10^−4) (Fig. A–J). However, PHF2 was negatively correlated with gamma delta T cells, which suppress immune responses (p = 4.2 × 10^−3) (Fig. K). Additionally, the correlation between PHF2 and the infiltration levels of regulatory T cells was comparable (p = 0.77) (Fig. L). These findings suggest that PHF2 enhances the presence of immune cells within tumors and plays a role in suppressing tumor growth in HCC.

Assessment of bioinformatic analyses in adipocytes

Here, we comprehensively analyzed the role of PHF2 in lipid metabolism in adipose tissue. We verified the expression patterns of genes in differentiated adipocytes using Gene Ontology (GO) annotation. In differentiated 3T3-L1 cells, PHF2 knockdown significantly reduced the expression of genes linked to lipid storage (GO:0010884), lipid droplet organization (GO:0034389), and fat cell differentiation (GO:0045600), with the exception of Cebpb. Knocking down PHF2 significantly increased the expression of genes associated with B cell (GO:0002312) and T cell activation (GO:0050870) in differentiated 3T3-L1 cells (Supplementary Fig. A). To comprehensively examine the role of PHF2, we isolated human adipose-derived stem cells (hADSCs) from adipose tissue and effectively differentiated them into adipocytes using adipogenic stimuli. Upon the depletion of PHF2, the expression of genes associated with lipid metabolism was either reduced or remained comparable in differentiated hADSCs (Fig. A). While most mRNA levels of fat cell differentiation genes decreased, CEBPB remained unchanged, and mRNA levels of immune cell activation genes increased following PHF2 knockdown (Fig. A).
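The infiltration correlations reported above are, in TIMER-style analyses, purity-adjusted partial Spearman correlations between PHF2 expression and per-patient immune cell fractions. The Python sketch below approximates that by rank-transforming the variables, regressing out tumor purity, and correlating the residuals; the data layout and the purity-adjustment strategy are assumptions for illustration, not the exact published pipeline.

    from scipy import stats

    def partial_spearman(x, y, purity):
        # Spearman correlation of x and y after removing the linear effect
        # of tumor purity on the ranks (TIMER-style approximation).
        rx, ry, rp = (stats.rankdata(v) for v in (x, y, purity))
        def residual(r):
            slope, intercept, *_ = stats.linregress(rp, r)
            return r - (slope * rp + intercept)
        rho, p = stats.pearsonr(residual(rx), residual(ry))
        return rho, p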
Overexpression of PHF2 clearly resulted in the increase of not only FITM1 and PLIN2 involved in lipid storage, but also the genes related to fat cell differentiation (Fig. B). Conversely, the mRNA level of genes which are linked to the immune cell activation pathway declined significantly (Fig. B). The analysis of lipid metabolism-related genes clearly demonstrates that depleting PHF2 in adipocytes leads to a marked reduction in lipid content, as confirmed by Oil Red O staining (Fig. A, C). On the contrary, PHF2 overexpression significantly increased lipid accumulation in adipocytes, aligning with the elevated mRNA levels of the genes involved in lipid storage and fat cell differentiation (Fig. B, D). These results firmly validate the critical role of PHF2 in lipid metabolism, as previously indicated by bioinformatics analyses (Fig. ). Assessment of bioinformatic analyses in hADSCs In fat tissue, there are undifferentiated hADSCs which are located among adipocytes. Therefore, PHF2’s functions were confirmed in undifferentiated hADSCs under PHF2 knockdown and overexpression as well. In hADSCs under PHF2 knockdown, there was a clear decrease in mRNA levels of genes associated with lipid metabolism except for FITM1 . Additionally, there was a notable reduction in the levels of CEBPA and CEBPB which are related to fat cell differentiation. Conversely, there was a pronounced increase in the expression of genes involved in B cell and T cell activation pathways (Fig. A). In hADSCs with overexpression of PHF2, there was an increase in the expression of FITM1, CEBPA , and CEBPB which are the genes related to lipid metabolism and fat cell differentiation (Fig. B). Conversely, the expression of genes associated with immune cell activation pathways decreased when PHF2 was overexpressed in hADSCs except for IL4 (Fig. B). Nile red staining confirmed a reduction in lipid content when PHF2 was knocked down in hADSCs, while lipid accumulation was observed in hADSCs with PHF2 overexpression (Fig. C,D). These results clearly demonstrate the significant impact of PHF2 on lipid metabolism in undifferentiated hADSCs, widening the rage of validated cells in adipose tissue and broadening the conclusions drawn from previous bioinformatics analyses (Fig. ). Assessment of bioinformatic analyses and the role of the E3 ligase PHF2 in liver cancer In our thorough examination of the previous bioinformatic analyses, we determined that PHF2 has markedly different effects on lipid accumulation and immune cell activation. This is clearly illustrated when contrasting the gene expression changes in human liver cancer cells with those in adipocytes and hADSCs (Figs. A, A, A). In HepG2 cells, the knockdown of PHF2 resulted in an increase in lipid metabolism-related genes, while decreasing genes associated with immune cell activation. This finding contrasts with the gene expression changes observed in adipocytes and hADSCs (Figs. A, A, A). The alterations in mRNA levels of genes linked to immune cell activation provide insights into the role of PHF2 as a regulator in the immune pathway, as suggested by bioinformatics analyses (Fig. ). The knockdown of PHF2 resulted in no significant changes in the expression of genes associated with fat cell differentiation in HepG2 cells, in stark contrast to the observable decrease observed in adipocytes and hADSCs (Figs. A, A, A). When PHF2 was knocked down in HepG2 cells, lipid accumulation increased, while the amount of lipid decreased in adipocytes. 
The Oil Red O staining for lipids confirmed that PHF2 has varying effects on lipid accumulation in adipocytes and liver cancer cells (Figs. C, B). Furthermore, these results suggest that PHF2 plays different roles in maintaining lipid metabolic balance across different tissues. In assessing the bioinformatic analyses of lipogenesis, a new role of PHF2 as an E3 ubiquitin ligase for SREBP1c was established in HepG2 and Hep3B cells which are liver cancer cell lines. Immunoblotting revealed that the mature form of SREBP1c increased when PHF2 was knocked down (Fig. C). To verify the role of PHF2 in regulating lipogenesis via SREBP1c, we identified adipogenic genes that are in downstream of SREBP1c. Following the knockdown of PHF2, the mRNA levels of lipogenesis-related genes increased due to the accumulation of SREBP1c. However, when SREBP1c and PHF2 were knocked down simultaneously, the mRNA levels of lipogenic genes were restored, as these genes are downstream of SREBP1c (Fig. D). Similarly, an increased amount of lipid was observed in HepG2 cells stained with Nile red, following PHF2 depletion. However, when PHF2 and SREBP1c were knocked down together, the lipid levels were found to recover (Fig. E). The results decisively validate PHF2’s role as an E3 ligase for SREBP1c, clearly demonstrating its impact on lipid metabolism in cancer cells, as previously indicated in the bioinformatics data (Fig. ). Assessment of clinical role of PHF2 in liver cancer 3D culture chips can replicate the in vivo tumor microenvironment by allowing for less hypoxic spheroid formation. These chips are designed with a PDMS bottom, which enables oxygen to penetrate effectively. It takes about one day for the cells to assemble and form spheroids, as they are seeded into 1,700 microwells. In five days, the spheroids at the bottom can be collected to evaluate their cross sections (Fig. A). Thus, a 3D culture system was utilized to culture HepG2 cells, demonstrating the tumor suppressive function of PHF2 (Fig. ). The average diameters of spheroids after 5 days were compared following the knockdown of PHF2 in HepG2 cells. The results showed that the depletion of PHF2 promoted liver cancer growth, resulting in increased diameters of the spheroids (Fig. B). Furthermore, immunofluorescence analysis of sectioned spheroids confirmed the promoted cancer growth. In the spheroids where PHF2 was knocked down, the expression level of Ki67, a well-known tumor marker, was elevated (Fig. C). In addition, the mRNA levels of proliferation-related genes increased in the groups where PHF2 was knocked down (Fig. D). Conversely, when PHF2 was overexpressed, we observed the opposite results (Fig. E–G). These findings suggest that PHF2 plays a critical role in suppressing liver cancer progression, supporting the previous bioinformatics data that indicated PHF2’s tumor-suppressive function in liver cancer (Fig. ).
To investigate the role of PHF2 in adipocyte differentiation, we conducted a gene profiling analysis using a cDNA-chip microarray on 3T3-L1 cells. 3T3-L1 cells stably expressing sh-Control or sh-Phf2 were cultured with adipogenic stimuli. The chip detected a total of 31,939 genes, and the scatter and MA plots revealed significant changes in the pattern of gene expression between the selected groups (Supplementary Fig. A,B). After adipogenesis, PHF2 knockdown differentially increased 248 annotated genes and decreased 347 genes compared to sh-Control (p < 0.05) (Supplementary Fig. C). To identify the signaling pathways involved, we conducted gene ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) enrichment analyses for the downregulated genes. According to the GO analysis of biological processes, PHF2 knockdown suppressed several pathways, including transcription and gene expression; differentiation of fat cells, erythrocytes, osteoblasts, and embryonic placenta; neuron development; and cell cycle pathways (Fig. A, left). Additionally, according to the KEGG pathway analysis, PHF2 knockdown reduced the AMPK, PPAR, and Rap1 signaling pathways, hormone regulation, Alzheimer’s and Parkinson’s diseases, non-alcoholic fatty liver disease, and neurodegeneration (Fig. A, right). To better understand the functions of PHF2, we further analyzed the 347 downregulated differentially expressed genes (DEGs) using network analysis in the ClueGO application. PHF2 knockdown significantly suppressed the following pathways: myeloid, enucleate erythrocyte, and osteoblast differentiation; development of adipose tissue; Alzheimer’s disease; non-alcoholic fatty liver disease; the lipid metabolic pathway; the PPAR, Rap1, AMPK, and adrenergic pathways; cell proliferation and the mitotic cell cycle; DNA biosynthetic process; transcription activity; and protein phosphorylation (Fig. B). These results suggest that PHF2 is positively associated with cell development, differentiation, hormonal regulation, and lipid metabolism during adipogenesis. Next, we conducted a comparative analysis using the 248 genes upregulated in the sh-Phf2 group compared with the sh-Control group (p < 0.05). These upregulated genes were analyzed using GO and KEGG enrichment techniques. Compared to the sh-Control group, the sh-Phf2 group showed an increase in pathways related to tissue morphogenesis and development, DNA-templated transcription, immune signaling and response to viruses, somite specification, lipid signaling, cell migration, motility, and differentiation, and nervous system development (Supplementary Fig. A, left). Furthermore, the KEGG pathway analysis indicated an increase in genes related to focal adhesion; pathways in cancer and glioma; influenza A; the NOD-like receptor, cytosolic DNA-sensing, PI3K-AKT, Rap1, and JAK-STAT signaling pathways; hepatitis B and C; and transcriptional misregulation in the sh-Phf2 group (Supplementary Fig. A, right). Based on the network analysis conducted using the ClueGO application, sh-Phf2 cells showed high expression in various processes such as DNA-binding transcription, endochondral ossification, synaptic vesicle localization, tube morphogenesis, focal adhesion, and pattern specification (Supplementary Fig. B).
This suggests that PHF2 knockdown in adipocytes promotes immune pathways related to response to infection and inflammation; cell structure processes such as morphogenesis, focal adhesion, and endochondral ossification; and cancer progression processes such as migration, proliferation, PI3K-AKT signaling, and transcriptional misregulation. Supplementary Fig. C–L presents the top five KEGG pathway maps from the downregulated or upregulated DEGs in sh-Phf2 cells.
Clinical roles of PHF2 in patients with obesity, NAFLD, NASH, and liver cancer
To validate the biological pathways associated with PHF2 expression, we evaluated the contribution of PHF2 in clinical samples. Publicly available NCBI GEO datasets were divided into PHF2-low and -high expression groups, and gene set enrichment analyses (GSEA) were performed. The result from genes of normal-weight and obese individuals (GSE55205, n = 23) revealed that the hallmark of histone 3 lysine 9 demethylase activity, the originally known function of PHF2, was closely correlated with PHF2 expression levels (Fig. A). In terms of lipid metabolism, PHF2 was positively involved in lipid droplet organization and adipogenesis (Fig. A). It has been suggested that adipocytes might activate gamma delta T cells and evade the immune response via lipid antigen presentation , . Thus, we validated the crosstalk between PHF2 and the immune response in this dataset (GSE55205). Interestingly, PHF2 expression was positively associated with gamma delta T cell activation (Supplementary Fig. A). However, other immune pathways, including adaptive immune response, B and T cell activation, the T helper cell type 1 and 2 pathways, and natural killer cell-mediated cytotoxicity, showed no significant association with PHF2 (Supplementary Fig. A). Because the liver is one of the most important tissues involved in lipid metabolism , we wondered whether PHF2 is also associated with the hallmarks of metabolic genes in patients with liver diseases. During liver pathogenesis, the PHF2 mRNA level showed a gradual loss along the following disease gradient: healthy control → steatosis → non-alcoholic steatohepatitis (NASH) and cirrhosis → hepatocellular carcinoma (HCC) (Supplementary Fig. B). Genes from patients with NAFLD and NASH (GSE89632, n = 63) showed that PHF2 is positively involved in the histone methylase complex, lipid storage, and adipogenesis (Fig. B). Next, we analyzed genes from patients with severe liver damage such as cirrhosis and HCC (GSE54238, n = 56). Unexpectedly, the correlations between PHF2 and genes involved in H3K9 methylation, the histone methylase complex, and liver cancer with H3K9me3 were comparable (Supplementary Fig. C). However, when we analyzed genes from patients with advanced HCC only, the genes associated with the lipoprotein biosynthetic process, lipoprotein metabolic process, and lipid metabolism pathway showed negative correlations with PHF2 expression (GSE54238, n = 13) (Fig. C). In this dataset, the hallmarks of immune effector process, cell activation involved in immune response, and chemokine signaling pathway were highly enriched in the PHF2-high expression group (Supplementary Fig. D). These results imply that PHF2 might have different functions across tissues and stages of disease progression. In terms of oncogenic pathways, PHF2 expression showed a positive correlation with genes involved in TGF beta signaling and the p53 pathway, which exhibit tumor-suppressive effects (Supplementary Fig. E). PHF2 expression also correlated positively with genes downregulated in metastasis, indicating that PHF2 acts as a tumor suppressor in liver cancer patients (Supplementary Fig. E).
Functional investigation of PHF2-interacting proteins in liver cancer cells
To investigate the role of PHF2 in severe liver disease, we analyzed publicly available mass spectrometry proteomics data (PRIDE repository #PXD044277) and identified 295 PHF2-interacting proteins. These proteins were identified through co-purification between two groups in the dataset, using immunoprecipitation with Flag or SA beads (Fig. A). Further analysis of protein–protein interaction networks revealed that PHF2 physically interacts with proteins related to several important biological processes, such as lipid metabolic process (FDR = 0.001), regulation of gene transcription (FDR = 2.7 × 10⁻¹⁰), cell cycle (FDR = 2.3 × 10⁻⁴), HCC markers (FDR = 0.002), and the immune system (FDR = 6.5 × 10⁻⁵⁹) (Fig. B). These findings suggest that PHF2 may play a role in regulating liver cancer progression.
The suppressive role of PHF2 in immune cell infiltration in HCC tissues
Infiltration of immune cells into tumors is linked to better patient survival and predicts the effectiveness of immune therapies . In a group of 371 patients with HCC, PHF2 was found to have positive correlations with the infiltration of various immune cells, including B cells (p = 6.4 × 10⁻⁵), CD4+ T cells (p = 3 × 10⁻¹⁰), CD8+ T cells (p = 7.4 × 10⁻⁴), natural killer cells (p = 1.6 × 10⁻³), myeloid dendritic cells (p = 5.7 × 10⁻¹⁰), macrophages (p = 2.6 × 10⁻¹¹), neutrophils (p = 5 × 10⁻¹¹), T follicular helper cells (p = 2.7 × 10⁻³), and mast cells (p = 9.9 × 10⁻⁴) (Fig. A–J). However, PHF2 was negatively correlated with gamma delta T cells, which suppress the immune response (p = 4.2 × 10⁻³) (Fig. K). Additionally, no significant correlation was found between PHF2 and the infiltration levels of regulatory T cells (p = 0.77) (Fig. L). These findings suggest that PHF2 enhances the presence of immune cells within tumors and plays a role in suppressing tumor growth in HCC.
Assessment of bioinformatic analyses in adipocytes
Here, we comprehensively analyzed the role of PHF2 in lipid metabolism in adipose tissue. We verified the expression patterns of genes in differentiated adipocytes using Gene Ontology (GO) annotation. In differentiated 3T3-L1 cells, PHF2 knockdown significantly reduced the expression of genes linked to lipid storage (GO:0010884), lipid droplet organization (GO:0034389), and fat cell differentiation (GO:0045600), with the exception of Cebpb. Knocking down PHF2 significantly increased the expression of genes associated with B cell (GO:0002312) and T cell activation (GO:0050870) in differentiated 3T3-L1 cells (Supplementary Fig. A). To examine the role of PHF2 more comprehensively, we isolated human adipose-derived stem cells (hADSCs) from adipose tissue and differentiated them into adipocytes using adipogenic stimuli. Upon the depletion of PHF2, the expression of genes associated with lipid metabolism was either reduced or remained comparable in differentiated hADSCs (Fig. A). While the mRNA levels of most fat cell differentiation genes decreased, CEBPB remained unchanged, and the mRNA levels of immune cell activation genes increased following PHF2 knockdown (Fig. A). Overexpression of PHF2 increased not only FITM1 and PLIN2, which are involved in lipid storage, but also the genes related to fat cell differentiation (Fig. B). Conversely, the mRNA levels of genes linked to the immune cell activation pathway declined significantly (Fig. B). Consistent with the analysis of lipid metabolism-related genes, depleting PHF2 in adipocytes led to a marked reduction in lipid content, as confirmed by Oil Red O staining (Fig. A, C). In contrast, PHF2 overexpression significantly increased lipid accumulation in adipocytes, in line with the elevated mRNA levels of the genes involved in lipid storage and fat cell differentiation (Fig. B, D). These results validate the critical role of PHF2 in lipid metabolism, as indicated by the preceding bioinformatics analyses (Fig. ).
Assessment of bioinformatic analyses in hADSCs
Adipose tissue also contains undifferentiated hADSCs located among the adipocytes. Therefore, the functions of PHF2 were also examined in undifferentiated hADSCs under PHF2 knockdown and overexpression. In hADSCs under PHF2 knockdown, there was a clear decrease in the mRNA levels of genes associated with lipid metabolism, except for FITM1. Additionally, there was a notable reduction in the levels of CEBPA and CEBPB, which are related to fat cell differentiation. Conversely, there was a pronounced increase in the expression of genes involved in B cell and T cell activation pathways (Fig. A). In hADSCs overexpressing PHF2, there was an increase in the expression of FITM1, CEBPA, and CEBPB, genes related to lipid metabolism and fat cell differentiation (Fig. B). Conversely, the expression of genes associated with immune cell activation pathways, except for IL4, decreased when PHF2 was overexpressed in hADSCs (Fig. B). Nile red staining confirmed a reduction in lipid content when PHF2 was knocked down in hADSCs, while lipid accumulation was observed in hADSCs with PHF2 overexpression (Fig. C,D). These results demonstrate the significant impact of PHF2 on lipid metabolism in undifferentiated hADSCs, widening the range of validated cell types in adipose tissue and broadening the conclusions drawn from the preceding bioinformatics analyses (Fig. ).
Assessment of bioinformatic analyses and the role of the E3 ligase PHF2 in liver cancer
In examining the preceding bioinformatic analyses, we determined that PHF2 has markedly different effects on lipid accumulation and immune cell activation across cell types. This is illustrated by contrasting the gene expression changes in human liver cancer cells with those in adipocytes and hADSCs (Figs. A, A, A). In HepG2 cells, the knockdown of PHF2 increased lipid metabolism-related genes while decreasing genes associated with immune cell activation, in contrast to the gene expression changes observed in adipocytes and hADSCs (Figs. A, A, A). The alterations in the mRNA levels of genes linked to immune cell activation provide insights into the role of PHF2 as a regulator of the immune pathway, as suggested by the bioinformatics analyses (Fig. ). The knockdown of PHF2 produced no significant changes in the expression of genes associated with fat cell differentiation in HepG2 cells, in stark contrast to the clear decrease observed in adipocytes and hADSCs (Figs. A, A, A). When PHF2 was knocked down in HepG2 cells, lipid accumulation increased, whereas the amount of lipid decreased in adipocytes. Oil Red O staining for lipids confirmed that PHF2 has differing effects on lipid accumulation in adipocytes and liver cancer cells (Figs. C, B). Furthermore, these results suggest that PHF2 plays different roles in maintaining lipid metabolic balance across tissues. In assessing the bioinformatic analyses of lipogenesis, a new role of PHF2 as an E3 ubiquitin ligase for SREBP1c was established in the liver cancer cell lines HepG2 and Hep3B. Immunoblotting revealed that the mature form of SREBP1c increased when PHF2 was knocked down (Fig. C). To verify the role of PHF2 in regulating lipogenesis via SREBP1c, we examined adipogenic genes downstream of SREBP1c. Following the knockdown of PHF2, the mRNA levels of lipogenesis-related genes increased owing to the accumulation of SREBP1c. However, when SREBP1c and PHF2 were knocked down simultaneously, the mRNA levels of lipogenic genes were restored, as these genes are downstream of SREBP1c (Fig. D). Similarly, increased lipid was observed in HepG2 cells stained with Nile red following PHF2 depletion; when PHF2 and SREBP1c were knocked down together, lipid levels recovered (Fig. E). These results validate PHF2’s role as an E3 ligase for SREBP1c and demonstrate its impact on lipid metabolism in cancer cells, as indicated by the bioinformatics data (Fig. ).
Assessment of clinical role of PHF2 in liver cancer
3D culture chips can replicate the in vivo tumor microenvironment by allowing less hypoxic spheroid formation. These chips are designed with a PDMS bottom, which enables oxygen to penetrate effectively. After the cells are seeded into 1,700 microwells, it takes about one day for them to assemble and form spheroids. After five days, the spheroids at the bottom can be collected to evaluate their cross sections (Fig. A). This 3D culture system was used to culture HepG2 cells and test the tumor-suppressive function of PHF2 (Fig. ). The average diameters of spheroids after 5 days were compared following the knockdown of PHF2 in HepG2 cells. The results showed that the depletion of PHF2 promoted liver cancer growth, resulting in increased spheroid diameters (Fig. B). Furthermore, immunofluorescence analysis of sectioned spheroids confirmed the enhanced cancer growth: in the spheroids where PHF2 was knocked down, the expression level of Ki67, a well-known tumor marker, was elevated (Fig. C). In addition, the mRNA levels of proliferation-related genes increased in the groups where PHF2 was knocked down (Fig. D). Conversely, when PHF2 was overexpressed, we observed the opposite results (Fig. E–G). These findings suggest that PHF2 plays a critical role in suppressing liver cancer progression, supporting the bioinformatics data indicating PHF2’s tumor-suppressive function in liver cancer (Fig. ).
Defects in lipid metabolic pathways can lead to various health risks such as obesity, diabetes mellitus, and fatty liver disease. Obesity refers to the accumulation of excessive lipids in adipose tissue and an increase in the size and number of fat cells in the body . The excess amounts of macronutrients in adipose tissue increase the risk of several diseases, leading to insulin resistance, hyperinsulinemia, diabetes, and glucose intolerance . Studies have shown that the development of adipose tissue depends on the activity of certain transcription factors, including members of the PPAR and C/EBP families. During early adipocyte differentiation, the expression of C/EBPβ and C/EBPδ is rapidly induced, followed by the activation of PPARγ and C/EBPα . This cascade supports adipocyte differentiation by enhancing the expression of genes such as PPARγ and C/EBPα . In addition, metabolic syndrome can lead to various liver diseases, ranging from simple steatosis to NASH, cirrhosis, and liver cancer. One of the major liver conditions associated with metabolic syndrome is NAFLD, in which more than 5% of liver cells accumulate lipids , increasing the risk of other liver diseases . Two transcription factors, SREBP1 and ChREBP , are responsible for de novo lipogenesis in the liver. They regulate the expression of lipogenic genes such as FASN, ACC, SCD1, and ELOVL6, which increase the lipid content of the liver . PHF2 affects cellular metabolism and differentiation not only in adipocytes and hADSCs but also in the liver. It enhances the transcriptional activities of C/EBPα, C/EBPδ, and PPARγ , , playing a crucial role in adipogenesis. In this study, we observed that PHF2 knockdown did not change the levels of CEBPB in either differentiated 3T3-L1 cells or human adipocytes (Fig. A and Supplementary Fig. A). As reported previously, C/EBPβ is induced during early adipocyte differentiation and its level gradually decreases over time . Thus, the levels of CEBPB can be comparable in fully differentiated adipocytes. However, under PHF2 overexpression, the mRNA level of CEBPB increased, as the induced level of CEBPB became detectable during adipocyte differentiation. This increase can be attributed to the larger change in PHF2 expression under overexpression, owing to higher transfection efficiency, compared to the change observed under PHF2 knockdown (Fig. B). Similarly, the mRNA level of PLIN2 increased when PHF2 was overexpressed but was unchanged when PHF2 was knocked down, which may likewise reflect the larger change in PHF2 expression under overexpression than under knockdown (Fig. A,B). In the case of undifferentiated hADSCs, the lipid metabolism-related genes affected by PHF2 overexpression and knockdown differed from each other. This suggests that the types and degrees of pathways affected by PHF2 in hADSCs may vary depending on the expression level of PHF2. In addition, the unexpected change in the PRDM16 mRNA level was opposite to that of CEBPA and CEBPB, suggesting the possible existence of other pathways regulated by PHF2 in an opposing manner in undifferentiated hADSCs; further studies of these findings are needed (Fig. A,B). In hepatocytes, PHF2 enhances fatty acid uptake and de novo lipogenesis via activation of ChREBP. Furthermore, PHF2 transgenic mice exhibit increased hepatic glutamate- and succinate-driven mitochondrial respiration .
No significant impact on glucose or insulin tolerance has been observed in PHF2-knockout mice . In human adipocytes and hADSCs, PHF2 knockdown led to a slight decrease in genes associated with lipid storage and lipid droplet organization, with the exception of a few genes (Figs. A and A). It is interesting to note that the genes associated with lipid storage and lipid droplet organization significantly decreased in mouse adipocytes but increased in HepG2 cells (Fig. A and Supplementary Fig. A). These findings suggest that the impact of PHF2 may vary between tissues and species. Since PHF2 behaves slightly differently in human adipocytes and hADSCs, despite both being part of adipose tissue, further research is needed to understand its various functions in the interplay between PHF2 and lipid metabolism across different cells and species. According to Fig. A, PHF2 has a negative correlation with lipid metabolism-related genes. Furthermore, PHF2 has been found to have E3 ubiquitin ligase activity toward SREBP1c, which is upstream of the genes involved in de novo fatty acid synthesis. Thus, PHF2 can control the protein level of SREBP1c, regulating lipid accumulation in liver cancer cells . In this study, this was validated by western blotting, RT-qPCR, and Nile red staining after knocking down PHF2 and SREBP1c in HepG2 cells (Fig. C–E). Lipogenesis-related genes and lipid accumulation increased under PHF2 knockdown, but these changes were reversed when PHF2 was knocked down alongside SREBP1c, suggesting that PHF2 regulates lipid metabolism through its E3 ligase activity targeting SREBP1c. PHF2 suppresses tumor growth in various tumors, including bladder, esophageal, head and neck, and prostate cancers , . Additionally, low PHF2 expression is significantly associated with aggressive metastasis, high Ki67 expression levels, and poor survival rates in breast and renal cancer patients , . In colon cancer, PHF2 acts as an epigenetic coactivator of p53, playing an essential role in the p53 signaling pathway . To further confirm the tumor-suppressive function of PHF2 in liver cancer, this study examined the expression of Ki67 and proliferation-related genes using 3D culture chips, which replicate the in vivo tumor microenvironment, thereby supporting the bioinformatics analyses (Figs. and ). Because an in vitro assay captures only a small part of the whole in vivo system, it may be beneficial to elucidate the regulatory mechanism of PHF2 in both in vivo and in vitro systems, which could provide better insight into its precise functions during disease progression across tissues. In addition, culturing liver cancer cells in 3D culture chips effectively replicates the in vivo tumor microenvironment, demonstrating the direct influence of PHF2 on liver cancer progression in an environment that closely mirrors physiological conditions (Fig. ). The effect of PHF2 on immunity is still controversial and context-dependent. For instance, PHF2 targets and eliminates H4K20me3 on TLR4-responsive promoters, which enhances the expression of inflammatory genes such as Tnf and Cxcl10 in murine macrophage cells. On the other hand, PHF2 promotes H3K9me2 demethylation, which enhances the expression of Nrf2, a major transcription factor involved in defense against oxidative stress . As a result, PHF2 plays a crucial role in safeguarding the liver from inflammation, oxidative stress, and fibrosis.
Moreover, our research indicates that PHF2 boosts immune pathways in HCC patients and liver cancer cells (Figs. and A). In patients with obesity, PHF2 exerts no significant effects on the activation of immune pathways except that of gamma delta T cells (Supplementary Fig. A). Meanwhile, PHF2 knockdown increases genes related to immune cell activation in differentiated adipocytes from humans and mice, and in hADSCs (Figs. A, A and Supplementary Fig. A). It is an interesting finding that PHF2 suppresses immune cell activation-related genes in adipocytes and hADSCs while upregulating these genes in HepG2 cells; further research is needed on this cell type-dependent, opposing regulation by PHF2. Our research sheds light on the different roles of PHF2 in metabolic tissues. We carried out comprehensive validation using bioinformatics, genomics, proteomics, and functional studies, which revealed that PHF2 enhances lipid accumulation and adipogenesis in adipose tissue through its canonical function as a transcriptional enhancer. However, in late-stage liver disease, PHF2 functions as a negative regulator of lipid metabolism through its non-canonical role as an E3 ligase for SREBP1c. Moreover, PHF2 behaves like a tumor suppressor in HCC tissues by enhancing immune cell infiltration. This comprehensive overview of the tissue-specific functions of PHF2 is summarized in Table . Our study not only confirms the previously suggested role of PHF2 in adipogenesis and liver disease progression but also proposes PHF2 as a molecular checkpoint in metabolic disease stages. Therefore, developing therapeutics targeting PHF2 could be a promising way to control disease progression.
Cell lines
HEK293T cells (human embryonic kidney, No. CRL-3216) were obtained from the American Type Culture Collection (Manassas, VA, USA). HepG2 and Hep3B cells (human hepatocellular carcinoma, No. 88065) were obtained from the Korea Cell Line Bank (Seoul, Republic of Korea). 3T3-L1 preadipocytes were kindly gifted by Dr. Jae-Woo Kim (Yonsei University, Seoul, Republic of Korea). Cells were maintained in Dulbecco’s modified Eagle’s medium (DMEM) supplemented with 10% fetal bovine serum (FBS), 100 units/mL penicillin, and 0.1 mg/mL streptomycin. Cells were incubated at 37 °C and 5% CO₂.
Isolation of human adipose-derived stem cells
This study was approved by the Institutional Review Board of Seoul National University Hospital (approval No. H-1506-136-683), and all experiments were performed in accordance with its guidelines. Human adipose-derived stem cells (hADSCs) were isolated from adipose tissues, which were chopped and digested in Hanks’ balanced salt solution (Sigma-Aldrich, St. Louis, MO, USA) with 0.2% collagenase type 1 (Worthington Biochemical Corporation, Lakewood, NJ, USA). After inactivation of the collagenase activity, the cell suspension was filtered through a 40 μm cell strainer (BD Biosciences, San Jose, CA, USA). After centrifugation at 420×g for 5 min, floating adipocytes were removed and only the stromal vascular fraction cells were collected.
Short hairpin RNAs (shRNAs) and transduction
For gene silencing, the pLKO.1-puro vector was purchased from Sigma-Aldrich. Oligonucleotides targeting PHF2 were inserted into the vector using AgeI and EcoRI restriction enzymes. The viral vector was co-transfected with the pMD2-VSVG, pRSV-RRE, and pMDLg/pRRE helper plasmids into 80% confluent HEK293T cells, and the viral supernatant was collected. Plasmids were transfected into cells using Lipofectamine 2000 (11668-019, Invitrogen) according to the manufacturer’s instructions. 3T3-L1 cells were infected overnight with the virus in the presence of 6 μg/ml polybrene (sc-134220, Santa Cruz). Cells stably expressing the viral vectors were selected using 5 μg/ml puromycin (P8833, Sigma-Aldrich). For the cDNA microarray analysis, we collected single colonies (n > 3) from the heterogeneous clones in each group and subcultured them together. The sequences of the shRNAs are listed in Supplementary Table .
Small interfering RNAs (siRNAs) and plasmid transfection
All siRNAs were synthesized by Integrated DNA Technologies (Coralville, IA). Cells were transfected with siRNAs using Lipofectamine RNAiMAX according to the manufacturer’s instructions (Thermo Fisher Scientific, Newark, DE). The sequences of the siRNAs are listed in Supplementary Table . A plasmid encoding the PHF2 protein was used for overexpression of PHF2 and was transfected into cells using Lipofectamine 2000 (Thermo Fisher Scientific, Newark, DE) according to the manufacturer’s instructions.
Adipocyte differentiation
3T3-L1 cells stably expressing sh-Control or sh-PHF2 were differentiated with 1 μM dexamethasone (D17856, Sigma-Aldrich), 500 μM 3-isobutyl-1-methylxanthine (I5879, Sigma-Aldrich), and 10 μg/ml insulin (I9278, Sigma-Aldrich). After four days of treatment, the cells were maintained in DMEM with 10 μg/ml insulin, and the media were changed every other day. After eight days of differentiation, cells were harvested for microarray analysis. For hADSC differentiation, cells were incubated with 1 μM dexamethasone, 500 μM 3-isobutyl-1-methylxanthine, 5 μg/ml insulin, and 200 μM indomethacin (Sigma-Aldrich). Cells were incubated for 14 days, and the media were changed every two days.
Microarray analysis
After differentiation, total RNA was extracted from the stable 3T3-L1 cells. Gene expression was analyzed using DNA microarrays in duplicate (each group contained n > 3 single colonies in one experiment). The synthesis of target cDNA probes and hybridization were performed using Agilent’s Low RNA Input Linear Amplification Kit (Agilent Technology, Santa Clara, CA, USA) according to the manufacturer’s instructions. The hybridized images were scanned using Agilent’s DNA microarray scanner and quantified using Feature Extraction Software (Agilent Technology).
Data analysis of genes from cDNA microarray
All data normalization and selection of fold-changed genes were performed using GeneSpring GX 7.3 (Agilent Technology). The averages of the normalized ratios were calculated by dividing the average normalized signal channel intensity by the average normalized control channel intensity. An unpaired t-test was used to determine differences in gene expression between the sh-Control and sh-Phf2 groups after adipogenesis. The significance of differences between the duplicate samples was assessed based on previous studies , . p < 0.05 and a fold change > 1.5 or < −1.5 were defined as the cut-off criteria. DEGs that could not be annotated with gene symbols were excluded. Scatter and MA plots were used to compare the two unrelated groups.
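For illustration, the selection criteria above can be reproduced outside GeneSpring; the sketch below applies the same per-probe unpaired t-test and fold-change cut-offs to a synthetic duplicate-array matrix (all values, array counts, and variable names are stand-ins, not the study data).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_probes = 31939  # number of genes detected on the chip
# Synthetic normalized intensities: two duplicate arrays per group.
control = rng.lognormal(mean=5.0, sigma=1.0, size=(n_probes, 2))    # sh-Control
knockdown = control * rng.lognormal(0.0, 0.3, size=(n_probes, 2))   # sh-Phf2

# Normalized ratio: mean signal channel / mean control channel.
ratio = knockdown.mean(axis=1) / control.mean(axis=1)

# Unpaired t-test per probe between the two groups.
_, p = stats.ttest_ind(knockdown, control, axis=1)

# Cut-offs from the text: p < 0.05 and fold change > 1.5 (up) or < -1.5,
# i.e. a ratio below 1/1.5 in the signed fold-change convention.
up = (p < 0.05) & (ratio > 1.5)
down = (p < 0.05) & (ratio < 1 / 1.5)
print(f"{up.sum()} upregulated and {down.sum()} downregulated probes")
```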
Gene ontology (GO) functional enrichment analysis
Genes differentially expressed between the sh-Phf2 and sh-Control groups under adipogenesis (p < 0.05) were used for functional annotation. Cytoscape 3.8.0 software and its ClueGO plug-in were employed to annotate the GO functions of the genes. The GO annotation enrichment mainly covered three aspects: biological processes, KEGG pathways, and WikiPathways. GO functional enrichment was selected based on statistically significant differences (p < 0.05). In the network plots, each circle represents differentially expressed genes enriched in a functional pathway, and each color represents a different pathway.
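ClueGO’s term-level statistics are over-representation tests; as a minimal stand-in, the one-sided hypergeometric test below asks whether a GO term is enriched among the DEGs (the annotation counts are illustrative, not taken from the study).

```python
from scipy.stats import hypergeom

N_background = 31939  # genes detected on the chip
K_annotated = 180     # background genes annotated to the GO term (illustrative)
n_degs = 347          # downregulated DEGs tested
k_hits = 12           # DEGs annotated to the term (illustrative)

# P(X >= k_hits) when drawing n_degs genes without replacement.
p = hypergeom.sf(k_hits - 1, N_background, K_annotated, n_degs)
print(f"enrichment p-value = {p:.3g}")
```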
KEGG pathway maps
Genes differentially expressed between the sh-Phf2 and sh-Control groups under adipogenic stimuli (p < 0.05) were plotted using the KEGG database ( https://www.genome.jp/kegg/pathway.html ) and presented with the KEGG mapping tools ( https://www.genome.jp/kegg/mapper/color.html ). Matched objects are marked in blue or red.
Gene set enrichment analysis
GSEA, a computational method, was used to determine statistically significant differences between the low- and high-PHF2 expression groups. A formatted GCT file was used as the input for the GSEA algorithm v2.0 ( http://www.broadinstitute.org/gsea ). A false discovery rate (FDR) q-value of less than 0.25 was considered statistically significant.
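For readers unfamiliar with the input format, the GCT v1.2 file expected by GSEA is plain text with a fixed three-line header followed by a genes-by-samples matrix; a minimal writer is sketched below (the expression values and sample labels are made up for illustration). The phenotype grouping itself (PHF2-low vs. -high) goes in a companion CLS file.

```python
import pandas as pd

def write_gct(expr: pd.DataFrame, path: str) -> None:
    """Write a genes-x-samples matrix in the GCT v1.2 format used by GSEA."""
    with open(path, "w") as f:
        f.write("#1.2\n")
        f.write(f"{expr.shape[0]}\t{expr.shape[1]}\n")
        f.write("NAME\tDescription\t" + "\t".join(expr.columns) + "\n")
        for gene, row in expr.iterrows():
            f.write(f"{gene}\tna\t" + "\t".join(f"{v:g}" for v in row) + "\n")

# Illustrative usage: columns labeled by the PHF2-low/-high grouping.
expr = pd.DataFrame({"PHF2_low_1": [5.2, 7.1], "PHF2_low_2": [5.0, 7.3],
                     "PHF2_high_1": [6.0, 6.8], "PHF2_high_2": [6.2, 6.9]},
                    index=["PHF2", "SREBF1"])
write_gct(expr, "phf2_groups.gct")
```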
Immune infiltration analysis
Tumor Immune Estimation Resource 2.0 (TIMER2, http://timer.cistrome.org/ ) was used to plot Spearman’s correlation coefficient between PHF2 expression and immune infiltration. The proportions of infiltrating immune cells were inferred using the CIBERSORT or TIMER algorithm.
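The statistic TIMER2 reports is an ordinary Spearman rank correlation between expression and the inferred infiltration estimate, equivalent to the following (synthetic values; the variable names are ours):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
phf2_expr = rng.normal(size=371)                    # per-tumor PHF2 expression
cd8_level = 0.3 * phf2_expr + rng.normal(size=371)  # inferred CD8+ T cell infiltration

rho, p = spearmanr(phf2_expr, cd8_level)
print(f"Spearman rho = {rho:.2f}, p = {p:.2e}")
```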
RT-qPCR
After isolating total RNA using TRIzol reagent (Invitrogen, Carlsbad, CA), cDNA was synthesized using the EasyScript cDNA Synthesis Kit (Applied Biological Materials, Richmond, BC, Canada). The cDNAs were amplified with BlasTaq™ 2X qPCR MasterMix reagent (Applied Biological Materials) on a StepOne Real-Time PCR System. The sequences of the primers are summarized in Supplementary Table .
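The quantification method is not spelled out in the text; assuming the widely used 2^-ΔΔCt (Livak) method for relative expression, the calculation would look like this (the Ct values are illustrative):

```python
def fold_change_ddct(ct_target_treated: float, ct_ref_treated: float,
                     ct_target_control: float, ct_ref_control: float) -> float:
    """Relative expression by the 2^-ddCt (Livak) method."""
    d_ct_treated = ct_target_treated - ct_ref_treated    # normalize to reference gene
    d_ct_control = ct_target_control - ct_ref_control
    return 2.0 ** -(d_ct_treated - d_ct_control)

# e.g. a lipogenic gene in sh-PHF2 vs. sh-Control cells, normalized to a
# housekeeping gene; prints ~2.6 (a 2.6-fold increase).
print(fold_change_ddct(22.1, 18.0, 23.6, 18.1))
```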
Oil red O staining
After washing with phosphate-buffered saline (PBS), cells were fixed with 3.7% formaldehyde in PBS for 10 or 30 min and dehydrated with 60% isopropanol. Cells were stained with 0.5% Oil Red O (O0625, Sigma-Aldrich) for 1 h at room temperature. Next, the cells were washed with 60% isopropanol until the background was clear. For quantification, the Oil Red O stain was eluted with 100% isopropanol, and the optical density (OD) was measured at a wavelength of 500 nm.
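Quantification then reduces to comparing OD500 readings between groups; a minimal sketch with illustrative triplicate values:

```python
import numpy as np

# OD500 of Oil Red O eluted in 100% isopropanol (replicate wells; values illustrative).
od_sh_control = np.array([0.45, 0.43, 0.46])
od_sh_phf2 = np.array([0.29, 0.27, 0.30])

relative_lipid = od_sh_phf2.mean() / od_sh_control.mean()
print(f"lipid accumulation relative to control: {relative_lipid:.2f}")
```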
Nile red staining
After washing with PBS, cells were fixed with 4% paraformaldehyde (PFA) for 10 min at room temperature. After fixation, each well was incubated with Nile red (1 mg/ml) for 20 min at room temperature. The cells were subsequently stained with 4′,6-diamidino-2-phenylindole (DAPI) for 1 min.
3D culture and immunofluorescence for sectioned spheroids
HepG2 cells (5 × 10⁵ per plate) were seeded in oxygen-permeable PDMS plates that had been coated with 4% Pluronic for 24 h. The cells were incubated in the plates for 5 days, and average spheroid diameters were analyzed with ImageJ. Immunofluorescence staining was performed on frozen sections of the HepG2 3D spheroids. Spheroids collected from each plate were fixed with 4% paraformaldehyde for 30 min at 4 °C and washed three times with PBS. The spheroids were then immersed sequentially in 10, 20, and 30% sucrose for 1 h each. After being embedded in OCT compound (Sakura Finetek, Tokyo, Japan) and stored at −80 °C for 24 h, the OCT blocks were cut into 10 μm sections on glass slides. The sectioned spheroids were washed with PBS and incubated with 1% BSA solution for 1 h. After this blocking step, they were incubated overnight with a Ki-67 primary antibody (1:200 dilution). After washing three times with 0.1% Tween-20 in PBS (PBST), they were incubated with a secondary antibody (Alexa Fluor 568, anti-rabbit, 1:400 dilution) for 1 h at room temperature. The spheroids were then washed three times with PBST, and the nuclei were stained with DAPI (1:400 dilution) for 5 min before the immunofluorescence images were acquired.
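The diameter measurement (done here in ImageJ) can equivalently be scripted; the sketch below segments a gray-scale spheroid image by Otsu thresholding and reports the equivalent diameter of the largest object. The file name and the assumption that the spheroid appears darker than the background are ours.

```python
from skimage import filters, io, measure

img = io.imread("spheroid_day5.png", as_gray=True)  # hypothetical image file
mask = img < filters.threshold_otsu(img)            # assumes spheroid darker than background
labels = measure.label(mask)
regions = measure.regionprops(labels)
spheroid = max(regions, key=lambda r: r.area)       # keep the largest connected object
print(f"equivalent diameter: {spheroid.equivalent_diameter:.1f} px")
```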
Statistical methods
All data were analyzed using Microsoft Excel 2010 (Microsoft), and results are expressed as the mean ± standard deviation (SD). The two-tailed unpaired Student’s t-test was used to compare the means of two groups. The CIBERSORT (DOI: 10.1038/nmeth.3337) and TIMER (DOI: 10.1186/s13059-016-1028-7) R packages were used to evaluate the immune infiltration patterns. In all analyses, *p < 0.05 was taken to indicate statistical significance.
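For concreteness, the two-group comparison described above corresponds to the following (synthetic replicate values):

```python
import numpy as np
from scipy import stats

group_a = np.array([1.02, 0.95, 1.10, 0.98])  # e.g. sh-Control (illustrative)
group_b = np.array([0.71, 0.66, 0.75, 0.69])  # e.g. sh-PHF2 (illustrative)

t, p = stats.ttest_ind(group_a, group_b)      # two-tailed, unpaired by default
print(f"{group_a.mean():.2f} ± {group_a.std(ddof=1):.2f} vs "
      f"{group_b.mean():.2f} ± {group_b.std(ddof=1):.2f}; t = {t:.2f}, p = {p:.3g}")
```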
Western blotting
Cells were lysed in 2× sodium dodecyl sulfate (SDS) sample buffer. The lysates were separated on SDS–polyacrylamide gels and transferred to Immobilon-P membranes (Millipore, Billerica, MA, USA). After cutting the membranes around the expected protein sizes, they were blocked with 5% skim milk dissolved in Tris-buffered saline containing 0.1% Tween 20 (TBST) for 1 h and incubated with a primary antibody overnight at 4 °C (mouse monoclonal anti-SREBP1, BD Biosciences 557036, 1:500 dilution; rabbit polyclonal anti-PHF2, Cell Signaling D45A2, 1:1,000 dilution; mouse monoclonal anti-tubulin, Cell Signaling 2146S, 1:10,000 dilution). After brief washing with TBST, the membranes were incubated with a horseradish peroxidase-conjugated secondary antibody for 1 h and visualized using an ECL Plus kit (Thermo Fisher Scientific, Waltham, MA, USA).
The effect of educational intervention based on the behavioral reasoning theory on self-management behaviors in type 2 diabetes patients: a randomized controlled trial | 5b00779a-f3cb-460f-ab8b-58ce866e4085 | 11218263 | Patient Education as Topic[mh] | Diabetes is one of the most important problems of the health system all over the world, approximately 90 to 95% of these patients suffer from type 2 diabetes and its prevalence and incidence is increasing all over the world . Every year, more than 7 million people around the world are diagnosed with diabetes and globally, it is the fifth cause of death in most developed countries . It is predicted that the number of people with diabetes in the Middle East region will more than double by 2045 . In terms of the total population of adults with diabetes, Iran ranks third in the Middle East . According to the estimates of the World Health Organization (WHO), if effective measures are not taken to control and prevent this disease number of people with type 2 diabetes in Iran will reach more than 6 million by 2030 and based on the annual growth rate, diabetes in Iran will reach the second place in the Middle East region . Diabetes is a complex disease that requires daily self-management decisions by a person with diabetes . Self-management diabetes is an active and dynamic process that often includes changes in lifestyle, including glucose management, diet, physical activities, stress management, medication adherence, foot care, and blood sugar health monitoring . Diabetes self-management education (DSME) is a vital element in the care of all people with diabetes and is considered necessary to improve patient outcomes , which is a comprehensive combination of clinical, educational, psychosocial and behavioral aspects of care. It addresses the needs for daily self-management and provides a foundation to help all people with diabetes perform daily self-care with confidence and has positive effects on diabetes knowledge, blood glucose control, and behavioral outcomes. It can prevent long-term complications such as eye and kidney complications, nerve involvement, cardiovascular diseases, and premature death . The purpose of teaching self-management behaviors is to provide knowledge, skills and self-confidence to accept the responsibility of self-management to people with diabetes . Self-management in patients with diabetes in terms of diet and exercise, frequent use of medications and blood sugar monitoring plays an important role in reducing diabetes-related complications and premature deaths and improves favorable results . Diet and physical activity play the most important role in controlling and preventing complications in people with diabetes . Self-management by people with diabetes is far less than optimal, as 12% of adults with type 2 diabetes do not follow self-management behaviors such as blood glucose monitoring, diet modification, physical activity at all, 60% to one or two behaviors, and only 28% of them complete self-management behaviors . Education plays a major role in controlling and preventing diabetes complications . Studies have shown that behavioral change educational interventions based on theory are very effective and self-management educational interventions on patients with type 2 diabetes can significantly improve the attitude, knowledge of diabetes and other psychological variables of patients and adherence to medication and improving quality of life, and 90% of studies showed overall improvement DSME based of theory . 
Behavioral Reasoning Theory (BRT) is a relatively new theory that can be seen as an improvement on the Theory of Planned Behavior (TPB). BRT is related to several other behavioral theories but offers distinct advantages over them . Theories such as the theory of planned behavior and the theory of reasoned action have been criticized for focusing mainly on factors related to the acceptance of a behavior while ignoring people’s resistance to implementing, or opposition to, the behavior . This theory has four main constructs: behavioral intention, attitude, reasons (for and against), and values . BRT proposes that reasons serve as important links between beliefs and motivation (e.g., attitudes, subjective norms, and perceived control), intention, and behavior. A basic theoretical assumption of this framework is that reasons influence motivation and intention, providing important empirical links between values, beliefs, reasons (for and against), attitude, and behavioral intention . This framework is illustrated in Fig. . During the planning and implementation of this study, the worldwide COVID-19 pandemic made it impossible to conduct face-to-face training; therefore, a combination of virtual and face-to-face training was used . Research has demonstrated that teaching self-management behaviors through virtual platforms can enhance healthy behaviors and help people control their blood sugar . Considering the high prevalence of diabetes in Iran, especially in Bushehr province, the importance of self-management behaviors in the management of this disease, and the prominent role of theories and models in correcting and improving health-related behaviors, this study, conducted for the first time in Iran, investigated the effect of an educational intervention based on the theory of behavioral reasoning on self-management behaviors in patients with type 2 diabetes.
Study design and participants
This study was an interventional study conducted among 113 patients with type 2 diabetes who were receiving regular care and treatment at the comprehensive health centers of Bushehr, Iran, in 2022. The research used a randomized controlled trial design with two parallel arms and an equal allocation ratio to evaluate the effectiveness of the intervention. The sample size was calculated as 60 people per group based on the study of Hailu et al. (2019), using PASS NCSS software (version 15) with a confidence level of 90%, a test power of 90% (power = 0.9), and an allowance of 10% for attrition. To select the samples, we first chose four comprehensive health centers out of the ten available in Bushehr using a simple random method. Random allocation was done at the health center level, with two selected centers assigned to the intervention group (Quds Health Center and Meraj Health Center) and two centers to the control group (Kyber Health Center and Haft Tir Health Center). Random allocation of the comprehensive urban health centers was performed before individual recruitment, in line with the CONSORT checklist. Next, 30 patients with diabetes who met the study entry criteria and presented to receive healthcare were selected by a simple random method from each center. We used sequentially numbered containers to implement the random allocation sequence. Each container was labeled with a unique identification number, and participants were assigned to their respective groups by drawing a container from the set. We took steps to conceal the sequence until interventions were assigned: the person responsible for generating the random allocation sequence and preparing the sequentially numbered containers had no involvement in participant enrollment or assignment and kept the list of allocations confidential until interventions were assigned. The research team members who designed and conducted the study generated the random allocation sequence. Enrollment of participants was done by healthcare providers at the comprehensive health centers based on eligibility criteria determined by the researchers. Participants were then assigned to interventions by drawing containers from the sequentially numbered sets. Due to practical limitations, blinding of participants and healthcare providers was not feasible in this study; however, we took steps during data collection and analysis to minimize potential bias, and the analyst was blinded. In total, there were 60 people in the control group and 60 people in the intervention group. To be eligible for participation, individuals had to meet specific criteria: a definite diagnosis of type 2 diabetes by a doctor, at least one year since diagnosis, the ability to read, write, and speak Farsi, possession of a smartphone and proficiency in using WhatsApp, age between 30 and 60 years, and no severe complications of diabetes, such as eye disease, kidney disease, or leg/skin ulcers. Exclusion criteria were severe complications of diabetes during the study period, withdrawal from further participation, death, and migration. After informed consent was obtained from participants and the research objectives were explained, both groups completed a pre-test questionnaire. Figure shows the flow chart of the present research.
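As a worked illustration of the sample size calculation described above (a minimal sketch, not the authors' actual PASS output; the effect size below is a placeholder, since the means and standard deviations taken from Hailu et al. are not reported in the text):

```python
from math import ceil
from scipy.stats import norm

def n_per_group(mean_diff, sd, alpha=0.10, power=0.90, attrition=0.10):
    """Two-sample comparison of means, inflated for expected attrition."""
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided, 90% confidence
    z_beta = norm.ppf(power)            # 90% power
    n = 2 * ((z_alpha + z_beta) * sd / mean_diff) ** 2
    return ceil(n / (1 - attrition))    # allow for 10% dropout

# Placeholder effect size (NOT the values from Hailu et al.):
print(n_per_group(mean_diff=1.0, sd=1.7))
```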
Measures
For this research, two questionnaires were used. The first collected demographic information such as age, gender, education, marital status, occupation, and family history. The second was a researcher-made questionnaire based on the BRT constructs (knowledge, attitude, intention, perceived behavioral control, subjective norms, reasons for, reasons against, and behavior) in the context of behavior change and the implementation of self-management behaviors (blood sugar monitoring, physical activity, medication adherence, and periodic examinations) to control and prevent short-term complications. The questionnaire was developed specifically for this research; an English-language version is included as a supplementary file with the main manuscript. In addition to the behavior questionnaire, fasting blood sugar (FBS) and HbA1C were measured as objective indicators of behavior. The primary outcome of this study was the effect of the intervention on the model constructs, including knowledge, attitude, intention, perceived behavioral control, subjective norms, and reasons for and against behavior change. The secondary outcomes were FBS and HbA1C levels, measured at three and six months after the intervention. HbA1C levels were determined using a biosystem kit and chromatography methods in a laboratory setting; these kits are standardized and approved by the Ministry of Health, Treatment, and Medical Education in our country. The questionnaire's face and content validity were evaluated by a panel of experts and respected professors of health education, and the content validity index (CVI) and content validity ratio (CVR) were calculated. Internal reliability was measured using Cronbach's alpha, and external reliability was evaluated through a test-retest method on a pilot sample of at least 30 people. Based on these evaluations, the questionnaire was deemed valid and reliable. The CVR for all constructs was greater than 0.78, meeting the acceptable criteria based on Lawshe's standards, and the CVI for all constructs was greater than 0.86, meeting the acceptable criteria based on Waltz and Bussel's standards. In the internal reliability test, Cronbach's alpha coefficient was 0.7 for attitude, intention, and perceived behavioral control; 0.84 for behavior; 0.91 for reasons for and against; and 0.92 for subjective norms. To measure external reliability, 30 participants were retested after two weeks, and the intraclass correlation coefficient (ICC) was calculated for each construct: knowledge (0.94), attitude (0.84), intention (0.87), perceived behavioral control (0.95), subjective norms (0.97), reasons for (0.97), reasons against (0.9), and behavior (0.92). Participants were assessed at three time points: pre-test (at study entry) and three and six months after the intervention.
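For readers unfamiliar with the validity indices reported above, the underlying arithmetic is simple. The sketch below, using an invented 10-expert panel rather than the study's actual ratings, computes Lawshe's CVR and the item-level CVI:

```python
def cvr(n_essential: int, n_experts: int) -> float:
    """Lawshe's content validity ratio: (n_e - N/2) / (N/2)."""
    half = n_experts / 2
    return (n_essential - half) / half

def cvi(relevance_ratings: list[int]) -> float:
    """Item-level CVI: share of experts rating relevance 3 or 4 on a 4-point scale."""
    return sum(r >= 3 for r in relevance_ratings) / len(relevance_ratings)

# Hypothetical panel of 10 experts judging one item:
print(cvr(n_essential=9, n_experts=10))        # 0.8
print(cvi([4, 4, 3, 4, 3, 4, 4, 3, 4, 4]))     # 1.0
```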
Procedure
People were invited to participate in the study after the samples had been divided into control and intervention groups. The study's objectives were explained to the target group, and after assurance of the confidentiality of their information, informed consent was obtained. Next, the patients' fasting blood sugar and HbA1c were measured, and in a group meeting, the demographic questionnaire and the researcher-made questionnaire based on behavioral reasoning theory were completed by both groups. The educational content was adapted from the national standards of diabetes self-management education (DSME), covering lifestyle changes, glucose management, diet, physical activity, medication adherence, foot care, and blood sugar monitoring. The intervention was developed by a medical doctor with a PhD in Health Education and Health Promotion together with the research team, which included an MSc student in Health Education and Promotion (the first author), who conducted all educational sessions at the comprehensive health centers. The two-month training program consisted of eight face-to-face training sessions held once a week. A WhatsApp group of the 60 intervention participants was also formed, and after each session, educational videos, clips, pamphlets, and tips were shared with them. Throughout this period, the control group received the usual training and care within the healthcare system. Three and six months after the end of the educational interventions, both groups completed the questionnaires based on the BRT constructs. Additionally, periodic examinations were conducted, including fasting blood sugar checks and monitoring of patients' HbA1c levels. Individuals were also provided with notebooks to record their daily blood sugar levels. The educational intervention, designed and implemented on the basis of BRT for the intervention group, was carried out over eight weeks. For the intervention group, appointments with a nutrition consultant and an ophthalmologist were arranged according to participants' needs and the internist's diagnosis, including diet training. In the first session, our main goal was to improve patients' understanding of diabetes, its complications, and self-management behaviors. Participants were invited to attend the session, and we explained its objectives to them. We emphasized the importance of understanding diabetes for successful self-management; provided a clear definition of diabetes and an overview of its different types (type 1, type 2, gestational); and discussed the role of insulin in managing blood sugar levels. We also covered common symptoms and risk factors associated with diabetes, as well as complications that can arise from uncontrolled blood sugar, such as heart disease, kidney disease, and nerve damage. We stressed the importance of prevention through proper management and explained the need for regular monitoring. We provided information on appropriate methods for checking blood sugar levels using a glucometer or continuous glucose monitor (CGM) and shared recommended target ranges for fasting and postprandial glucose. Additionally, we highlighted the role of exercise in controlling blood sugar and gave examples of suitable exercises that can be incorporated into daily routines based on individual preferences and limitations. We also discussed diet management techniques, including carbohydrate counting and the plate method, and offered practical tips on creating a balanced diet plan focused on whole grains, proteins, vegetables, fruits, and fiber-rich foods while limiting processed sugars, saturated fats, and alcohol intake.
Finally, the session included a facilitated group discussion in which participants were encouraged to share their experiences or concerns regarding diabetes self-management behaviors. The second session aimed to recap the previous session and emphasize the advantages of self-management behaviors for diabetes. Participants gained an understanding of how important self-management is in preventing severe diabetes-related complications. The session also focused on changing attitudes toward implementing these behaviors and addressing any obstacles that may hinder their adoption; it lasted 60 minutes. Examples of complications associated with poor management were discussed, and a question-and-answer segment encouraged active participation, allowing participants to share their experiences, challenges, and successes in managing diabetes. A brainstorming activity was also conducted in which participants identified common obstacles or barriers to implementing self-management behaviors. Strategies and solutions for overcoming these obstacles were discussed, and participants were encouraged to share their own tips and techniques. In conclusion, the key takeaways from the session, highlighting the benefits of self-management behaviors in preventing complications, were summarized, and additional resources for further learning were provided. PowerPoint slides and images were used to show the chronic complications associated with diabetes, both to raise awareness of the consequences of poor management and to evoke emotion; in the virtual group, we also showed educational video clips featuring real examples of diabetic foot ulcers, amputations, and vision impairment caused by uncontrolled diabetes. In the third session, we aimed to evaluate the previous meetings, engage in open discussion, and encourage dialogue about the social influence and perspectives of close individuals, doctors, and healthcare workers on self-management behaviors. Our goal was to improve adherence to self-management behaviors by creating a supportive environment in which participants could freely express their thoughts and feelings. The session lasted 45 minutes and followed a structured approach focused on subjective norms, addressing the concept of social influence in relation to self-management behaviors. We provided examples illustrating how the opinions of close individuals, doctors, and healthcare workers can influence a person's commitment to self-management, and we encouraged participants to share, in the virtual group, their personal experiences of social influence on their health. The main objective of the fourth session was to review the previous session and enhance perceived behavioral control by providing a step-by-step guide to self-management, equipping participants with techniques for continuous implementation. The session lasted 45 minutes. A summary of the key points from the previous session was presented, followed by an opportunity for participants to ask questions or seek clarification. The concept of performing self-management behaviors gradually and continuously was introduced, highlighting the advantages of breaking behaviors down into smaller, more manageable steps, and practical methods for implementing self-management behaviors step by step were then presented.
To encourage participant engagement, interactive discussions and brainstorming activities were used, and visual aids such as PowerPoint slides were introduced in the virtual group to enhance understanding and retention. Pamphlets and handouts containing relevant information on self-management behaviors were also provided in the virtual group. In the fifth session, we focused on improving meeting effectiveness by actively engaging participants in conversation and considering their suggestions. We discussed reasons for resistance to self-management behaviors, identified solutions, and provided practical health management suggestions. Participants were split into small groups to discuss specific topics related to self-management behaviors and present their ideas; potential obstacles and objections were addressed, and active participation was encouraged from all attendees. The sixth session, titled "Exploring Solutions to Facilitate Self-Management Behaviors," aimed to help participants learn strategies that make self-management behaviors easier, including methods to facilitate exercise, measure blood sugar, and develop positive attitudes toward implementing these behaviors. The 45-minute session included presentations, question-and-answer segments, and a guest speaker who shared their successful experience in managing their disease. To enhance learning, we used educational aids such as clear pictures of patients who had successfully implemented self-management behaviors, and we showed related video clips in the virtual group. The seventh session focused on implementing self-management behaviors and helping participants understand the consequences of their actions; it lasted 45 minutes. Participants actively discussed the impact of their behavior on self-management, were encouraged to share personal experiences and insights, and highlighted the positive outcomes that can be achieved by effectively managing one's health. Emotional relief exercises, using techniques such as deep breathing and guided visualization, helped participants release negative emotions or barriers toward adopting self-management behaviors. A talk emphasized the importance of maintaining a positive attitude toward implementing self-management behaviors and the benefits of this mindset, and video clips showcasing successful patient stories and effective self-management techniques were shown in the virtual group. The eighth session reviewed the previous sessions and allowed participants to express their emotions and opinions, with the main focus on understanding why self-management behaviors can be either beneficial or challenging. Techniques used to promote the adoption of these behaviors included question-and-answer segments, interactive discussions, brainstorming activities, and visual aids such as PowerPoint slides, images, and video clips; the session lasted 45 minutes. The control group was taught unrelated topics, such as communication skills, time management, empathy, and self-awareness. After the six-month post-test, the control group received a summary of the teaching points provided to the intervention group.
Statistical analysis
The data were analyzed using SPSS 26 at a 5% significance level. First, we confirmed the normality of the data using the Kolmogorov-Smirnov test (P > 0.05). We then used descriptive statistics (mean, standard deviation, percentage, and frequency) and chi-square tests to report and compare the frequency distributions of participants' demographic characteristics. Next, we conducted repeated measures ANOVA for both primary and secondary outcomes to compare within-group means before and after the intervention, and we used independent t-tests for between-group comparisons.
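A minimal sketch of this pipeline follows; file and column names are hypothetical, and statsmodels' AnovaRM stands in for the SPSS repeated-measures procedure used by the authors:

```python
import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format table: one row per participant per time point,
# with columns id, sex, group ('intervention'/'control'),
# time ('pre'/'3mo'/'6mo'), and score (a BRT construct, FBS, or HbA1C).
df = pd.read_csv("brt_outcomes_long.csv")

# 1. Kolmogorov-Smirnov normality check, as in the paper:
print(stats.kstest(df["score"], "norm",
                   args=(df["score"].mean(), df["score"].std())))

# 2. Chi-square comparison of a demographic frequency table:
base = df.drop_duplicates("id")                 # one row per participant
print(stats.chi2_contingency(pd.crosstab(base["group"], base["sex"])))

# 3. Within-group repeated measures ANOVA across the three time points:
for g, sub in df.groupby("group"):
    print(g, AnovaRM(sub, depvar="score", subject="id", within=["time"]).fit())

# 4. Between-group independent t-test at each time point:
for t, sub in df.groupby("time"):
    grp = sub.groupby("group")["score"]
    print(t, stats.ttest_ind(grp.get_group("intervention"),
                             grp.get_group("control")))
```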
Results
In this study, there were 120 participants, 60 in each group; 84 (70.0%) were women and 36 (30.0%) were men. Three people from the intervention group and 4 people from the control group were excluded from the study for personal reasons and for not completing the questionnaire. Ultimately, 113 people took part in the educational intervention sessions and completed the 3-month and 6-month post-test questionnaires: 57 in the intervention group and 56 in the control group, aged between 30 and 60 years (M = 54.40, SD = 5.83). A total of 97 patients (80.8%) were over 50 years old, and 75 patients (62.5%) had a family history of diabetes. Additionally, 80 (66.7%) were smokers, and only 37 (30.8%) had primary education. As shown in Table , there was no significant difference between the control and intervention groups in the frequency distribution of demographic variables (P > 0.05). As shown in Table , the mean scores for all BRT constructs (knowledge, attitude, intention, perceived behavioral control, subjective norms, reasons for, reasons against, and behavior) changed significantly over time and between the intervention and control groups. At the start of the study, there was no significant difference in the mean scores for any construct between the two groups. However, after the educational intervention, the mean scores in the intervention group increased significantly for all BRT constructs except reasons against self-management; these changes were seen at the second and third time points. In the pre-test, there was no significant difference in the mean levels of HbA1C and FBS between the two groups, but differences in mean HbA1C emerged at the second and third time points. After the educational intervention, the intervention group showed a significant decrease in mean HbA1C and FBS levels compared with the control group.
Discussion
The current study focused on diabetes self-management education in patients with diabetes and the effect of a DSME program based on behavioral reasoning theory on patients' diabetes knowledge and self-management activities. This study showed that diabetes self-management education based on behavioral reasoning theory promotes behaviors that prevent diabetes-related complications and improves FBS and HbA1C control; it had a positive effect on patients' knowledge (awareness), attitude, and performance, and by strengthening the reasons for the behavior, it improved people's motivation, intention, and ability to adopt healthy behaviors. The study found that the mean knowledge score increased in the intervention group after the educational intervention, consistent with the findings of Hosseini et al. One of the fundamental requirements for behavior change is an increase in knowledge, particularly for health behaviors. Studies evaluating the knowledge of diabetic patients have confirmed the importance of educating patients about their disease: insufficient knowledge undermines self-care and the prevention of diabetes complications. Previous research has shown that increased awareness and knowledge positively affect patients' ability to manage their disease, perform self-care, reduce blood sugar levels, and control their diabetes. In a study by Debussche et al., peer-led diabetes education provided once every three months for a year to patients with type 2 diabetes did not significantly improve diabetes knowledge scores, in contrast to the present study's results. Similarly, in Dan et al.'s study, a 10-minute media training did not improve patients' awareness. One possible reason for the failure of such programs could be the one-sided nature and short duration of the training, or a lack of acceptance of training and change by participants. The results of this study indicate that the intervention group had a higher mean attitude score toward self-management behaviors after the educational intervention, suggesting that people were more likely to adopt and maintain these behaviors when they believed the behaviors had positive effects on their health. This finding is consistent with previous research showing a positive correlation between attitudes toward self-management behaviors and their adoption, and with the findings of Sadeghi et al. In Khalaf et al.'s study, however, attitudes did not change, possibly because of that study's short duration; unlike awareness, changing attitudes requires longer interventions. In addition, a study conducted in London by Zwarenstein et al. showed that training doctors with educational booklets to improve their attitude toward referring patients for ophthalmology examinations was ineffective. That finding contrasts with the present study and indicates that, although low-cost, educational pamphlets should not be the only means of educating and changing attitudes. The study also found that the intervention group's mean subjective norms score increased significantly after the educational intervention.
As a result, people were better able to understand the importance of social support in adopting healthy behaviors and reducing risky behaviors related to their disease. This indicates that patients are more likely to adopt self-management behaviors when they perceive expectations and support from family members, particularly spouses, as well as doctors, healthcare workers, and friends. Similar findings were reported by Babazadeh et al. and Zindatalab et al., whose studies showed that providing educational programs for those who influence patients can improve subjective norms and encourage self-care behaviors in patients with type 2 diabetes. Behavioral reasons "for/against" are among the other constructs of behavioral reasoning theory; like the constructs of perceived barriers and perceived benefits in the health belief model, they can play a vital role in translating knowledge into behavior, affecting a person's attitude and performance and thereby producing behavior change. In the present study, the mean score of reasons for the behavior increased after the educational intervention, while the mean score of reasons against decreased. Reasons against the behavior can include physical, psychological, or financial barriers that prevent a person from performing appropriate self-management behaviors; after appropriate solutions were provided, the mean score of reasons against decreased significantly. By increasing the reasons in favor of the behavior, educational interventions can increase patients' motivation and willingness to implement self-management behaviors and adhere to them. Our results also showed that the intervention group's mean intention score increased after the educational intervention. Such interventions can play a crucial role in enhancing diabetes self-management behaviors by strengthening intention, and higher intention can lead to faster implementation of behavior; these findings are consistent with a similar study by Damayanti and colleagues. However, our results do not support Robin et al.'s finding that intention accounts for only about 50% of behavior implementation. The mean self-management behavior score increased significantly in the intervention group three and six months after the intervention. Based on the present results, increasing knowledge, together with the other constructs of behavioral reasoning theory, including attitude, behavioral intention, and perceived behavioral control, improved individuals' skills and performance and enhanced the self-management behaviors of the intervention group. Our results also demonstrated that the educational intervention based on behavioral reasoning theory produced significant improvements in HbA1C levels in the intervention group compared with the control group. Lowering HbA1C reduces the risk of diabetes-related complications such as cardiovascular disease, kidney problems, and nerve damage. The HbA1C test is valuable for diagnosing diabetes and assessing glycemic control.
It provides accurate results and is easy to administer, making it particularly useful in low- and middle-income countries; this information can guide treatment decisions and help prevent complications of uncontrolled blood sugar. Recent studies have shown that DSME leads to a moderate decrease in HbA1c compared with usual care for people with type 2 diabetes, regardless of the treatment method used. Other studies have also demonstrated that education can improve self-care variables, fasting blood sugar control, and HbA1c levels. A study by Zheng et al. showed that self-management training of diabetic patients improved fasting blood sugar and HbA1c in the intervention group, which supports the findings of the present study. Healthcare workers are therefore encouraged to teach diabetic patients how to implement self-management behaviors to achieve better blood sugar control. Previous studies have demonstrated that educating and intervening with patients with type 2 diabetes can effectively improve their performance and behavior, consistent with the results of the current study. This highlights the importance of targeted education and support in empowering individuals with type 2 diabetes to manage their condition effectively. Overall, these results reinforce the importance of comprehensive diabetes self-management education programs that incorporate behavioral reasoning theory principles for improving blood sugar control. They emphasize that providing individuals with knowledge, skills, attitudes, intentions, and support can positively change their self-management behaviors and ultimately result in better glycemic control over time.
Strengths and limitations
To our knowledge, the present study is the first to examine this topic, which is one of its strengths. We recommend using this theory, with its focus on behavioral reasons for and against, in the self-management education of patients with diabetes and other chronic diseases in the future. In this research, the subjects in the control and intervention groups were selected through random sampling, and the study lasted six months. It is also suggested that this educational method be evaluated in patients with type 1 diabetes. Owing to the limited sample in this study, we did not compare self-management behaviors, blood glucose control, and intervention effects across different types of antidiabetic treatment (such as insulin injections versus oral medications). The study was conducted in a specific population of adults with type 2 diabetes in Bushehr, Iran; therefore, the findings may not apply to other populations with different demographic characteristics or healthcare systems. Furthermore, this study measured diabetes control indicators such as fasting blood sugar (FBS) and HbA1c levels; these measurements are important for evaluating overall diabetes management and provide additional insight into how self-management interventions affect diabetes control outcomes. Finally, because participation depended on subjects' willingness, selection bias was unavoidable, and these results cannot be generalized to all patients with diabetes.
Conclusion
Diabetes self-management education informed by behavioral reasoning theory can effectively improve self-management performance in patients with type 2 diabetes by creating a positive attitude and strengthening patients' subjective norms toward implementing self-management behaviors, which in turn leads to better blood sugar control. This educational program can serve as a useful model for promoting health-related behaviors. The findings have significant implications for healthcare practice, policy development, and future research in diabetes self-management education. In terms of practice, the results highlight the effectiveness of a DSME program based on behavioral reasoning theory in enhancing diabetes knowledge and self-management behaviors; healthcare providers should consider integrating this educational intervention into standard care for individuals with diabetes to improve disease management and control. From a policy perspective, it is crucial to invest in comprehensive diabetes education programs that go beyond knowledge acquisition to encompass attitudes, intentions, and perceived behavioral control. Policymakers should prioritize funding for such programs to ensure that individuals with diabetes receive adequate support and resources to manage their condition effectively; by allocating resources to training healthcare professionals in delivering the DSME program or establishing partnerships with community organizations, policymakers can reach underserved populations more effectively. For future research, long-term follow-up studies would be valuable to assess the lasting impact of the intervention on knowledge retention, self-management behaviors, and clinical outcomes, and to determine whether behavior change is sustained over time among participants who received the DSME program. Additionally, exploring alternative delivery methods such as online platforms or mobile applications could enhance accessibility and reach individuals who face barriers to traditional face-to-face education. Overall, this study underscores the importance of implementing effective educational interventions based on behavioral reasoning theory in both clinical practice and policy initiatives targeting individuals with type 2 diabetes. By addressing knowledge alongside attitudes, intentions, and perceived behavioral control through comprehensive education programs, better disease management outcomes become possible for people living with diabetes.
Below is the link to the electronic supplementary material.
Supplementary Material 1
Supplementary Material 2
Tristetraprolin Prevents Gastric Metaplasia in Mice by Suppressing Pathogenic Inflammation

TTP Suppresses Adrenalectomy-Induced Gastric Inflammation
Gastric inflammation is associated with the development of gastritis, oxyntic atrophy, and metaplasia. TTP enhances the turnover of numerous proinflammatory mRNAs, such as those encoding TNF. We hypothesized that enhanced systemic TTP expression could protect mice from gastric inflammation and metaplasia. To test this hypothesis, we used TTPΔARE mice, in which a 136-base AU-rich instability region was deleted from the 3' UTR of the gene encoding TTP, Zfp36. As previously reported in other tissues, we confirmed that germline deletion of the ARE region results in the accumulation of TTP mRNA in the mouse gastric fundus at 2 months of age ( A ). We previously showed that adrenalectomy (ADX) rapidly induces spontaneous gastric inflammation and spasmolytic polypeptide-expressing metaplasia (SPEM). We used bilateral ADX to assess gastric inflammation and SPEM development in TTPΔARE mice ( B ). As expected, wild-type (WT) control mice showed prominent inflammation within the gastric corpus 2 months after ADX ( C ). In contrast, both TTPΔARE heterozygous and homozygous mice were protected from increased inflammation. We have previously shown that ADX-induced gastric inflammation is composed predominantly of macrophages and eosinophils. Analysis of the WT mice showed 4.7-fold and 28-fold increases in gastric macrophages and eosinophils 2 months after ADX, respectively ( D ). In contrast, neither TTPΔARE heterozygous nor homozygous mice showed a significant increase in inflammatory cells. These data indicate that increased systemic TTP expression from the normally regulated Zfp36 locus can protect the stomach from ADX-induced chronic inflammation.

TTP Protects Mice From SPEM Development
SPEM develops in response to glandular damage such as oxyntic atrophy and is a putative precursor of gastric adenocarcinoma. Inflammation potently induces SPEM development. Because TTPΔARE mice were resistant to ADX-induced inflammation, we asked whether increased TTP expression could prevent the development of oxyntic atrophy and metaplasia. The gross morphology of sham-operated TTPΔARE heterozygous and homozygous mice was indistinguishable from that of sham-operated WT mice ( A ), and there were no significant differences in the number of parietal cells or chief cells ( B ). Two months after ADX, WT mice had lost 82% of their parietal cell population and 99% of their mature chief cells ( ). Moreover, WT mice showed prominent mucous cell hyperplasia within the gastric corpus, identified by an increase in Griffonia simplicifolia (GSII) lectin staining, which binds to mucin 6. In contrast to ADX WT mice, neither TTPΔARE heterozygous nor homozygous mice showed a significant change in their parietal and chief cell populations, and both genotypes had normal gastric morphology 2 months after ADX ( ). Oxyntic atrophy, loss of the mature chief cell marker BHLHA15 (also known as MIST1), and expansion of GSII+ cells are among the defining characteristics of SPEM. We confirmed SPEM development by immunostaining for the de novo SPEM marker CD44v9, a splice variant of CD44. Although there was widespread staining of CD44v9 in ADX WT mice, CD44v9 was not detected within the gastric glands of ADX TTPΔARE mice ( A ). Re-entry into the cell cycle accompanies chief cell transdifferentiation.
We performed co-immunofluorescence for Ki67 and β-catenin (CTNNB1) to identify proliferative epithelial cells. In sham mice, proliferation was restricted to the gland isthmus, which is widely regarded as the stem cell compartment within the gastric corpus ( B ). In contrast, 2 months after ADX, WT mice showed numerous Ki67+ cells throughout the neck and base. However, proliferation remained unchanged 2 months after ADX in TTPΔARE heterozygous and homozygous mice. In addition, we performed quantitative reverse-transcription polymerase chain reaction (qRT-PCR) on a panel of transcripts of the advanced SPEM-associated genes Cftr, Wfdc2, and Olfm4. Consistent with the increase in CD44v9 staining, there was significant induction of all 3 SPEM markers in ADX WT mice ( C ). However, these transcripts did not significantly increase in TTPΔARE homozygous mice. These results show that increased TTP expression protected the mice from oxyntic atrophy and SPEM development.

TTP Suppresses the Induction of Proinflammatory Gene mRNAs After ADX
Because TTPΔARE mice were protected from ADX-induced gastric inflammation and SPEM, we next used RNA sequencing (RNAseq) to examine their gastric transcriptomes 5 days after ADX ( B ). We used this early time point after ADX to avoid secondary changes caused by the anatomic alterations seen in long-term ADX mice. Moreover, there was limited gastric inflammation 5 days after ADX, as shown by modest increases in the pan-immune cell marker Ptprc (CD45) and the pan-macrophage marker Cd68 ( A ). RNAseq showed significant increases in inflammatory gene expression 5 days after ADX in WT mice. Gene set enrichment analysis (GSEA) comparing the sham WT and ADX WT groups showed significant enrichment of mRNAs associated with the Gene Ontology (GO) inflammatory response pathway ( B ). Surprisingly, there was also significant enrichment of inflammatory genes 5 days after ADX in TTPΔARE homozygous mice. However, the normalized enrichment score was 6.36 in the WT group compared with 5.02 in the TTPΔARE group, suggesting moderately increased inflammation in the WT group. Moreover, a comparison of the ADX WT group with the ADX TTPΔARE group showed greater activation of inflammatory response pathways in ADX WT mice ( B ). Next, we ranked the GSEA data and found that the GO innate immune response pathway was the seventh most activated pathway in the WT group (normalized enrichment score, 5.32) ( C ). In contrast, this pathway ranked 46th in the TTPΔARE group (normalized enrichment score, 3.97). Comparison of the WT ADX group with the TTPΔARE ADX group showed significant positive enrichment ( C ), suggesting increased innate immune system activation in WT ADX mice. Macrophages have been shown previously to be required for the induction of SPEM development. Therefore, we next analyzed the differentially expressed gene (DEG) lists using Ingenuity Pathway Analysis (IPA) to assess transcripts associated with macrophage activation. IPA predicted significant activation of the "Activation of Macrophages" pathway in ADX WT mice (activation z-score, 2.43) ( D ). However, this pathway was not significantly activated in ADX TTPΔARE mice. Importantly, GSEA showed that pathways associated with adaptive immunity, such as the GO adaptive immune response ( E ) and GO lymphocyte activation ( F ) pathways, were activated equivalently in both WT and TTPΔARE mice. These results are consistent with published reports that mature lymphocytes are dispensable for inducing SPEM development.
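To make the enrichment scores above concrete, the following is a bare-bones version of the GSEA running-sum statistic (the weighted Kolmogorov-Smirnov-style score of Subramanian et al.), not the exact implementation used in this study; gene names and test statistics are invented, and the permutation step that converts a raw enrichment score into a normalized enrichment score is omitted:

```python
import numpy as np

def enrichment_score(genes, ranking_stats, gene_set, p=1.0):
    """Weighted KS-style running sum over a gene list pre-ranked by a test statistic."""
    in_set = np.isin(genes, list(gene_set))
    hit = np.where(in_set, np.abs(ranking_stats) ** p, 0.0)
    hit /= hit.sum()                                      # step up at set members
    miss = np.where(~in_set, 1.0, 0.0) / (~in_set).sum()  # step down elsewhere
    running = np.cumsum(hit - miss)
    return running[np.argmax(np.abs(running))]            # maximal deviation from 0

# Toy example: genes ranked by an ADX-vs-sham statistic (values invented)
genes = np.array(["Tnf", "Il13", "Cd68", "Olfm4", "Gif", "Atp4b"])
stat = np.array([5.2, 4.8, 3.1, 2.0, -2.5, -4.5])         # already in rank order
print(enrichment_score(genes, stat, {"Tnf", "Il13", "Cd68"}))
```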
Transcripts Containing AREs Are Only a Small Portion of the ADX-Induced Genes

TTP is an RNA binding protein that binds to adenylate-uridylate–rich target sequences in mRNAs before promoting the turnover of those mRNAs. RNAseq showed 760 DEGs between the sham WT and ADX WT groups. In contrast, there were only 490 DEGs between the sham TTPΔARE and ADX TTPΔARE groups ( A ). Of the DEGs, 189 genes were regulated in both groups. We screened the transcripts that were up-regulated by ADX in the WT group for the presence of ideal TTP binding sequences (UAUUUAU and UAUUUUAU). We identified 94 mRNAs that contained a potential TTP binding motif ( B ). Up-regulation of 93 of these transcripts was blunted significantly in ADX TTPΔARE mice, indicating that TTP may enhance the degradation of these transcripts. Importantly, there were established TTP targets among the 94 ARE-containing transcripts, such as the mRNA encoding Tnf, and inflammatory genes associated with SPEM development, including Il13. Il13 is potently induced by the alarmin IL33. Interestingly, we found that Il33 expression was increased significantly only in TTPΔARE mice 5 days after ADX ( C ). Consistent with this increase, we did not identify an ARE within the Il33 transcript, suggesting it may not be a direct TTP target. In contrast, Il13, which does contain potential TTP binding sites, was blunted significantly in ADX TTPΔARE mice; thus, TTP suppression of Il13 may disrupt macrophage activation. Together, these data show that TTP directly regulates numerous proinflammatory genes within the stomach.
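Because the ideal TTP binding sequences are given explicitly (UAUUUAU and UAUUUUAU), the motif-screening step lends itself to a short worked example. The Python sketch below shows how such a scan over 3' UTR sequences might look; the example sequences, gene names, and matching rules (RNA alphabet, overlapping matches) are illustrative assumptions and do not reproduce the authors' actual screening pipeline.

```python
# Minimal sketch of an ARE motif screen like the one described above.
# The UTR sequences and gene names below are placeholders; the published
# screen may have used different sequence sources and matching rules.
import re

TTP_MOTIFS = ("UAUUUAU", "UAUUUUAU")  # ideal TTP binding sequences named in the text

def find_are_sites(utr_rna: str) -> list[tuple[int, str]]:
    """Return (position, motif) for every TTP motif match in a 3' UTR."""
    utr = utr_rna.upper().replace("T", "U")  # accept DNA-alphabet input as a convenience
    hits = []
    for motif in TTP_MOTIFS:
        # zero-width lookahead so overlapping occurrences are all reported
        for m in re.finditer(f"(?={motif})", utr):
            hits.append((m.start(), motif))
    return sorted(hits)

# Hypothetical example: screen a dict of gene -> 3' UTR sequence.
utrs = {
    "are_containing_example": "CCAUAUUUAUGGAUAUUUUAUCC",
    "no_are_example": "GGCCGGAAGGCC",
}
for gene, seq in utrs.items():
    sites = find_are_sites(seq)
    print(gene, "->", sites if sites else "no ideal ARE motif")
```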
Tnf Knockout Mice Are Partially Protected From ADX-Induced SPEM

TNF-α is a prominent proinflammatory cytokine produced by macrophages and other leukocytes. Aberrant TNF production is associated with inflammatory disease within the gastrointestinal tract, and may increase the risk of gastric cancer. Moreover, Tnf mRNA is an established TTP target, and germline Zfp36 KO mice have systemic inflammation attributed in part to excessive TNF. We hypothesized that suppression of Tnf in TTPΔARE mice may protect against ADX-induced inflammation and metaplasia. Therefore, we adrenalectomized Tnf KO mice and assessed their stomachs 2 months after surgery. Interestingly, Tnf KO mice showed only intermediate protection from SPEM ( A and B ). In ADX Tnf KO mice, there were regions of the gastric corpus that appeared identical to sham controls, with the normal complement of parietal and chief cells, and that were negative for the SPEM marker CD44v9 ( A ). In contrast, other regions of the lesser curvature appeared identical to sections from the ADX WT mice ( A , far right panel). We quantitated the number of parietal and chief cells present in both normal and SPEM regions. Quantitation showed that although ADX Tnf KO mice showed a significant loss of parietal and chief cells relative to sham controls, these effects were diminished significantly compared with ADX WT mice ( B ). In addition to stomach inflammation, ADX WT mice developed splenomegaly ( C and D ), a classic feature of ADX in rodents. However, TTPΔARE homozygous spleen weights did not differ significantly from WT mice 2 months after ADX. Surprisingly, Tnf KO completely rescued the splenomegaly observed in ADX WT mice. Together, these data indicate that although Tnf contributes to SPEM development, there likely are redundant mechanisms that control pathogenic gastric inflammation. Moreover, these results show that TTP's protective effects in the stomach are the result of broad anti-inflammatory effects beyond the suppression of Tnf.

TTP Does Not Prevent High-Dose-Tamoxifen–Induced SPEM Development

SPEM development occurs in response to glandular damage within the gastric corpus. Adrenalectomy induces SPEM development by triggering massive gastric inflammation. Our results show that TTP overexpression suppresses ADX-induced gastric inflammation. We hypothesized that TTP protected from SPEM by regulating gastric inflammation. To test this hypothesis, we used the high-dose tamoxifen (HDT) model. HDT induces chief cell transdifferentiation toward the SPEM lineage by killing parietal cells and is largely noninflammatory. WT and TTPΔARE homozygous mice were treated with HDT 3 times over 72 hours, and stomachs were collected 24 hours after the final dose. There were no morphologic differences between the stomachs of vehicle-treated WT and TTPΔARE mice ( A ). As expected, HDT treatment induced nearly complete oxyntic atrophy in both genotypes. Importantly, loss of mature chief cells, denoted by loss of MIST1 staining ( A ) and Gif mRNAs ( B ), was equivalent in both HDT-treated WT and TTPΔARE mice. Moreover, there was concurrent induction of the SPEM markers CD44v9 as well as Cftr mRNAs. These results show that TTP overexpression does not directly inhibit SPEM development and suggests that SPEM protection occurs through inhibition of the intensity and type of inflammation.
Post-transcriptional regulation of gene expression by RNA binding proteins is critical for maintaining cellular and tissue homeostasis. Dysregulation of RNA binding proteins is associated with a host of diseases including cancer. Zfp36 encodes a zinc finger RNA binding protein, TTP, that binds to ARE-containing mRNAs and destabilizes them by recruiting deadenylases, thus promoting mRNA decay. It has been estimated that approximately 26% of human mRNA 3' UTRs contain at least a single minimal TTP family binding site, UAUUUAU or UAUUUUAU, and disruption of TTP family members has been associated with inflammatory disorders and cancer. TTP is a critical regulator of numerous proinflammatory cytokines. TTP KO mice develop multisystem inflammatory disease that is largely caused by excessive TNF expression. In contrast, increased TTP expression confers resistance to numerous inflammatory pathologies including arthritis and dermatitis. Here, we report that knockin mice that have regulated increases in TTP levels throughout the body are protected from ADX-induced gastric inflammation and SPEM. Our results suggest that TTP could be a master regulator of gastric inflammation, and therapies that lead to increased TTP protein levels may be effective at treating gastric inflammation.

Chronic inflammation is strongly associated with gastric cancer development. Within the stomach, inflammation induces a well-defined histopathologic progression in which stomach damage leads to gastric atrophy, metaplasia, dysplasia, and adenocarcinoma. SPEM is a potentially preneoplastic form of metaplasia that develops in response to damage within the gastric corpus that also may serve as a healing mechanism. However, in the setting of prolonged damage, such as during chronic inflammation, SPEM becomes increasingly proliferative and eventually may progress toward carcinogenesis. We found that TTP overexpression protected mice from gastric inflammation and SPEM development. We used ADX as a model to challenge the TTPΔARE mice. In WT mice, ADX triggered massive spontaneous inflammation of the gastric corpus followed by SPEM development. Both homozygous and heterozygous TTPΔARE mice were completely protected from ADX-induced gross inflammation and SPEM development. We previously reported that suppressing gastric inflammation by depleting macrophages in ADX WT mice protects them from SPEM development. Thus, it is likely that TTP prevents SPEM development by suppressing inflammation. Importantly, we found that TTP overexpression did not affect HDT-induced SPEM development. These results suggest that TTP does not directly inhibit SPEM development and that protection from SPEM in the ADX model likely occurs by inhibiting inflammation. Our results suggest that therapies that elicit even a modest increase in TTP expression may effectively control gastric inflammation.

TTP primarily functions by binding to specific AREs within the 3' UTR of target mRNAs, eventually promoting the degradation of the mRNA. Our RNAseq studies showed that TTP potently suppressed genes associated with macrophage activation in ADX mice. Importantly, TTP regulates the expression of IL13 and TNF-α, cytokines that have been implicated in inducing SPEM development. Our RNAseq data showed that 33% of DEG transcripts in ADX WT mice contained potential TTP binding sites, including Tnf and Il13. TTP regulation of Il13 may be an important mechanism protecting from SPEM.
Within the stomach, Il13 is potently expressed by type 2 innate lymphoid cells. In response to gastric epithelial damage, Il13 is induced by IL33, which is released from the surface epithelial cells. IL13 drives alternative macrophage activation, which in turn drives SPEM development. Several recent studies have reported that IL33 is a critical mediator of SPEM development, and Il33 KO mice are resistant to experimental SPEM models. Interestingly, Il33 induction was greater in ADX TTPΔARE mice than in WT mice, and our analysis did not identify any TTP binding sites within the Il33 gene, suggesting that TTP may not directly regulate Il33 expression. Thus, TTP suppression of Il13 may be important for disrupting macrophage activation and protecting from SPEM development. However, given that TTP can regulate other cellular pathways, including those involving nuclear factor-κB, it is likely that TTP can indirectly regulate the expression of additional inflammatory genes within the stomach.

Surprisingly, despite the almost complete suppression of inflammatory infiltrates into the stomachs of ADX TTPΔARE mice, we found striking up-regulation of numerous inflammatory transcripts and pathways. Increased TTP specifically suppressed the innate immune response, while pathways associated with the adaptive immune response were not affected significantly. It has been postulated previously that TTP preferentially regulates the innate immune response. However, although myeloid-specific TTP KO mice have an abnormal inflammatory response when challenged with lipopolysaccharide, they do not phenocopy the spontaneous inflammatory pathologies that develop in the whole-body TTP KO. Several studies have found that lymphocytes are dispensable for inducing SPEM development. Thus, even if TTP primarily suppresses the innate immune system, ADX-induced lymphocyte activation may be inconsequential for SPEM development.

Aberrant TNF production is associated with numerous inflammatory pathologies of the gastrointestinal tract. H pylori infection potently induces TNF production, and Tnf KO mice are protected from SPEM development in some mouse models. Thus, TNF may contribute to gastric carcinogenesis. Tnf mRNA is a well-known TTP target, and the numerous inflammatory pathologies that develop in TTP KO mice were rescued by treatment with TNF neutralizing antibodies or by breeding to TNF-receptor–deficient mice. Although macrophages produce large amounts of TNF and are critical for driving SPEM development, we hypothesized that TTP suppression of Tnf was the underlying mechanism by which TTPΔARE mice were protected from ADX-induced gastric inflammation. Surprisingly, we found that Tnf KO mice were at least partially susceptible to ADX-induced gastric inflammation and metaplasia. Interestingly, Tnf KO mice did not develop ADX-induced splenomegaly. These results show tissue-specific roles for TTP in regulating inflammation, and suggest that TTP's anti-inflammatory role in the stomach is more complex than the suppression of a single proinflammatory cytokine.

Regulation of inflammation is multifaceted, occurring at the transcriptional level, post-transcriptional level, and beyond. We previously have shown that glucocorticoids are critical transcriptional regulators of gastric inflammation. Here, we report that increased expression of the RNA binding protein TTP protects mice from gastric inflammation and metaplasia. Importantly, TTP transcription is induced by glucocorticoids.
TTP may be a key effector molecule by which glucocorticoids regulate the gastric inflammatory response and may be a useful therapeutic target for treating gastric inflammatory disease. Recent reports have found that TTP expression is decreased in gastric cancer samples. Thus, there is a need for continued study into the role of TTP in suppressing gastric inflammation and carcinogenesis.
Animal Care and Treatment

All mouse studies were performed with approval by the National Institute of Environmental Health Sciences Animal Care and Use Committee. C57BL/6J mice were purchased from the Jackson Laboratories (000664; Bar Harbor, MA). TTPΔARE mice were generated as previously described and were maintained on a congenic C57Bl/6 genetic background. Mice were administered standard chow and water ad libitum and maintained in a temperature- and humidity-controlled room with standard 12-hour light/dark cycles. Sham, adrenalectomy, and castration surgeries were performed at 8 weeks of age by the National Institute of Environmental Health Sciences Comparative Medicine Branch. After ADX, mice were maintained on 0.85% saline drinking water to maintain ionic homeostasis. HDT treatment was performed as previously described by Saenz et al. Briefly, mice received 3 consecutive intraperitoneal injections of 0.25 mg/g body weight tamoxifen (MilliporeSigma, Burlington, MA) every 24 hours. Stomach tissue was collected 24 hours after the final dose.

Histology

Mice were euthanized by cervical dislocation at the indicated time points. Stomachs were removed and opened along the greater curvature and washed in phosphate-buffered saline to remove gastric contents. Stomachs were fixed overnight in 4% paraformaldehyde at 4°C and then cryopreserved in 30% sucrose and embedded in optimal cutting temperature media. Histology and cell quantitation were performed as previously described. Briefly, 5-μm stomach cryosections were incubated with antibodies against the H+/K+ adenosine triphosphatase α subunit (clone 1H9; MBL International Corporation, Woburn, MA), MIST1 (clone D7N4B; Cell Signaling Technologies, Danvers, MA), CD45 (clone 104; BioLegend, San Diego, CA), CD44v9 (Cosmo Bio, Tokyo, Japan), CD68 (clone E307V; Cell Signaling Technologies), Siglec F (clone 1RNM44NN; eBiosciences, San Diego, CA), Ki67 (clone D3B5; Cell Signaling Technologies), or CTNNB1 (clone 14; BD BioSciences, Franklin Lakes, NJ) for 1 hour at room temperature or overnight at 4°C. After washing in phosphate-buffered saline with 0.1% Triton X-100 (Thermo Fisher Scientific, Waltham, MA), sections were incubated in secondary antibodies for 1 hour at room temperature. Fluorescent-conjugated GSII lectin (Thermo Fisher Scientific, Waltham, MA) was added with secondary antibodies. Sections were mounted with Vectastain mounting media containing 4′,6-diamidino-2-phenylindole to visualize nuclei (Vector Laboratories, Burlingame, CA). Images were obtained using a Zeiss 710 confocal laser-scanning microscope equipped with Airyscan (Carl-Zeiss GmbH, Jena, Germany) and running Zen Black (Carl-Zeiss GmbH) imaging software.

Image Quantitation

Parietal cells and chief cells were quantitated as previously described using confocal micrographs captured using a 20× microscope objective and 1-μm–thick optical sections. Cells were counted using the ImageJ (National Institutes of Health, Bethesda, MD) count tool. Cells that stained positive with anti-H+/K+ antibodies were identified as parietal cells, while cells that stained positive with anti-MIST1 antibodies and were GSII negative were identified as mature chief cells. Counts were reported as the number of cells observed per 20× field. Images were chosen that contained gastric glands cut longitudinally. Leukocytes were quantitated using Nikon Elements General Analysis (Nikon, Tokyo, Japan). Six tile-scanned images were captured using the 20× objective and stitched on Zen Black.
Eosinophils were identified as CD45/Siglec F double-positive, while macrophages were CD45/CD68 double-positive.

RNA Isolation and qRT-PCR

RNA used for qRT-PCR and RNAseq was isolated from a 4-mm biopsy specimen of the gastric corpus lesser curvature. RNA was extracted in TRIzol (Thermo Fisher Scientific) and precipitated from the aqueous phase using 1.5 volumes of 100% ethanol. The mixture was transferred to an RNeasy column (Qiagen, Hilden, Germany), and the remaining steps were followed according to the RNeasy kit manufacturer's recommendations. RNA was treated with RNase-free DNase I (Qiagen) as part of the isolation procedure. Reverse-transcription followed by qPCR was performed in the same reaction using the Universal Probes One-Step PCR kit (Bio-Rad Laboratories, Hercules, CA) and the TaqMan primers (Thermo Fisher Scientific) Cftr (Mm00445197_m1), Olfm4 (Mm01320260_m1), Wfdc2 (Mm00509434_m1), Zfp36 (Mm00457144_m1), Il33 (Mm00505403_m1), and Il13 (Mm00434204_m1) on a QuantStudio 6 (Thermo Fisher Scientific). mRNA levels were normalized to the reference gene Ppib (Mm00478295_m1).

RNAseq

RNA was isolated 5 days after sham surgery or adrenalectomy as described earlier. Four mice were used for each experimental group. Indexed samples were sequenced using the 75-bp paired-end protocol via the NextSeq500 (Illumina) per the manufacturer's protocol. Raw reads (27–41 million pairs of reads per sample) were filtered using a custom perl script and the cutadapt program (v2.8) to remove low-quality reads and adapter sequences. Preprocessed reads were aligned to the University of California, Santa Cruz mm10 reference genome using STAR (v2.7.0f) with default parameters. The quantification results from featureCounts (available in Subread software, v1.6.4) then were analyzed with the Bioconductor package DESeq2, which fits a negative binomial distribution to estimate technical and biological variability. Comparisons were made between sham WT vs ADX WT, sham TTPΔARE vs ADX TTPΔARE, and ADX WT vs ADX TTPΔARE. An abundance cut-off was applied so that only transcripts whose average expression in the WT samples was greater than 0.1 fragments per kilobase of transcript per million mapped reads (FPKM) were evaluated. A transcript was considered differentially expressed if the adjusted P value was less than .05 and its fold change was −1.5 or less or 1.5 or more. Lists of significant transcripts were analyzed further using IPA (version 01-18-05; Qiagen). Enrichment or overlap was determined by IPA using the Fisher exact test ( P < .05). GSEA was performed using GSEA v4.0.3 software (Broad Institute, San Diego, CA) and Molecular Signatures Database v7.0. Transcripts were preranked based on their P value and their fold change of gene expression. This application scores a sorted list of transcripts with respect to their enrichment in selected functional categories (KEGG, Biocarta, Reactome, and GO). The significance of the enrichment score was assessed using 1000 permutations. The Benjamini-Hochberg false-discovery rate was calculated for multiple-testing adjustment. A q value of 0.05 or less was considered significant. The heatmap was generated with the mean expression values of the 94 selected genes. The expression values were log2-transformed before heatmap generation with scaling by row using the pheatmap function in the R package pheatmap.
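The differential-expression thresholds above (adjusted P < .05, |fold change| ≥ 1.5) and the GSEA preranking step can be illustrated on an exported DESeq2 results table. The Python sketch below is not the authors' pipeline: DESeq2 itself runs in R, the file name and the "gene" and "pvalue" columns are assumptions about a hypothetical export, and the ranking formula shown is one common convention, since the text does not state the exact metric.

```python
# Sketch of post-DESeq2 filtering and GSEA preranking under the stated thresholds.
# Column names "padj" and "log2FoldChange" follow DESeq2's usual output; the
# CSV export, "gene", and "pvalue" columns are assumptions for illustration.
import numpy as np
import pandas as pd

res = pd.read_csv("deseq2_results.csv")  # hypothetical export, one row per transcript

FC = 1.5  # |fold change| >= 1.5  <=>  |log2FC| >= log2(1.5)
deg = res[(res["padj"] < 0.05) & (res["log2FoldChange"].abs() >= np.log2(FC))]
print(f"{len(deg)} differentially expressed transcripts")

# One common way to prerank transcripts by P value and direction of change;
# the paper does not give its exact ranking formula, so this line is a guess.
res["rank_metric"] = -np.log10(res["pvalue"]) * np.sign(res["log2FoldChange"])
res.sort_values("rank_metric", ascending=False)[["gene", "rank_metric"]] \
   .to_csv("gsea_prerank.rnk", sep="\t", index=False, header=False)
```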
The RNAseq data are available in the Gene Expression Omnibus repository at the National Center for Biotechnology Information (accession number GSE164349; available at https://www.ncbi.nlm.nih.gov/geo ).

Statistical Analysis

All error bars are ± SD of the mean. The sample size for each experiment is indicated in the figure legends. Experiments were repeated a minimum of 2 times. Statistical analyses were performed using 1-way analysis of variance with the post hoc Tukey t test when comparing 3 or more groups or by an unpaired t test when comparing 2 groups. Statistical analysis was performed by GraphPad Prism 8 software (GraphPad Software, San Diego, CA). Statistical significance was set at P ≤ .05. Specific P values are listed in the figure legends.
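As a worked example of the comparisons described above (one-way ANOVA followed by Tukey's post hoc test for three or more groups, unpaired t test for two), the sketch below reproduces the logic in Python on invented cell counts; the published analyses were run in GraphPad Prism, so scipy/statsmodels merely stand in for it here.

```python
# Illustrative re-creation of the statistical workflow on made-up counts.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

sham = np.array([52, 48, 55, 50])      # hypothetical parietal cells per 20x field
adx_wt = np.array([9, 12, 7, 10])
adx_ttp = np.array([49, 51, 46, 53])

# Three or more groups: one-way ANOVA, then Tukey's post hoc comparisons.
f, p = stats.f_oneway(sham, adx_wt, adx_ttp)
print(f"ANOVA: F={f:.2f}, p={p:.4f}")

values = np.concatenate([sham, adx_wt, adx_ttp])
groups = ["sham"] * 4 + ["ADX WT"] * 4 + ["ADX TTPdARE"] * 4
print(pairwise_tukeyhsd(values, groups, alpha=0.05))

# Two groups: unpaired t test.
t, p2 = stats.ttest_ind(sham, adx_wt)
print(f"t test: t={t:.2f}, p={p2:.4f}")
```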
Diretriz de Miocardites da Sociedade Brasileira de Cardiologia – 2022 (Brazilian Society of Cardiology Guideline on Myocarditis – 2022)

1. Epidemiology
2. Definition and Etiology
2.1. Genetic Factors in the Etiopathogenesis of Myocarditis
3. Pathophysiology
4. Diagnostic Evaluation
4.1. Diagnostic Criteria for Suspected Myocarditis
4.1.1. Diagnostic Evaluation Flowchart
4.2. Clinical Evaluation: Clinical Scenarios Raising Suspicion
4.3. Biomarkers
4.3.1. Laboratory Markers of Inflammatory Injury
4.3.2. Laboratory Markers for Etiopathogenic Investigation
4.4. Electrocardiogram
4.4.1. Diagnostic Criteria by Electrocardiogram/Holter/Stress Testing
4.4.2. Prognosis
4.5. Echocardiogram
4.6. Cardiac Magnetic Resonance
4.7. Nuclear Medicine
4.7.1. Radiotracers for Single-Photon Emission Computed Tomography (SPECT)
4.7.2. Radiotracers for Positron Emission Tomography (PET)
4.7.3. Additional Perspectives
4.8. Coronary Computed Tomography Angiography and Coronary Angiography
4.9. Endomyocardial Biopsy: Indications, Technique, and Complications
4.9.1. Considerations for Indication
4.9.2. Prognosis
4.9.3. Technique
4.9.4. Complications
4.10. Histologic Analysis and Viral Testing – Molecular Biology and Genome
4.10.1. Histologic Analysis
4.10.2. Immunohistochemical Analysis
4.10.3. Genetic Profile Analysis
4.10.4. Virology
5. Treatment
5.1. Treatment Flowcharts
5.2. Immunosuppression: Indications and Types
5.3. Antivirals: Indications and Types
5.4. Immunomodulation (Immunoglobulin – Immunoadsorption): Indications and Types of Immunoglobulins
5.4.1. Immunoadsorption
5.5. Conventional Cardioprotective Therapy
5.5.1. Without Ventricular Dysfunction
5.5.2. With Ventricular Dysfunction and Stable Hemodynamics
5.5.3. Patients with Ventricular Dysfunction and Unstable Hemodynamics: Therapeutic Approach
5.6. General Care: Physical Activity and Vaccination
6. Special Situations
6.1. Fulminant Myocarditis
6.1.1. Diagnostic Evaluation
6.1.2. Therapeutic Approach
6.2. Sarcoidosis
6.2.1. Diagnosis
6.2.2. Treatment and Prognosis
6.2.3. Prognosis
6.3. Giant Cell Myocarditis
6.3.1. Treatment
6.3.2. Clinical Presentation and Diagnosis
6.4. Acute Chagasic Myocarditis and Reactivation
6.4.1. Clinical Manifestations and Routes of Infection; Reactivation in Immunosuppressed Patients
6.4.2. Diagnosis
6.4.3. Treatment
6.5. Myocarditis due to Tropical Diseases
6.6. Myocarditis due to COVID-19
6.6.1. Possible Pathophysiology of SARS-CoV-2–Related Myocarditis
6.6.2. Direct Viral Myocardial Injury
6.6.3. Diagnosis of COVID-19–Related Myocarditis
6.6.4. Laboratory
6.6.5. Electrocardiogram
6.6.6. Imaging
6.6.7. Endomyocardial Biopsy
6.7. Acute Cardiotoxicity due to Antineoplastic Therapy
6.7.1. Antineoplastic Agents That Induce Acute Cardiotoxicity
6.7.2. Diagnosis of Acute Cardiotoxicity
6.7.3. Treatment of Acute Cardiotoxicity
6.7.4. Prognosis
6.7.5. Prevention
6.8. Myocarditis in Children and Adolescents
6.8.1. Causal Factors
6.8.2. Prognosis
6.9. Myocarditis with Pericardial Involvement
6.9.1. Diagnosis and Treatment
6.10. Myocarditis Mimicking Acute Myocardial Infarction
7. Rheumatic Carditis
8. Myocarditis in Autoimmune Diseases
9. Management of Cardiac Arrhythmias in Myocarditis
9.1. Noninvasive and Invasive Evaluation of Arrhythmias in the Acute and Chronic Phases of the Various Causes of Myocarditis
9.2. Treatment of Arrhythmias and Prevention of Sudden Death in the Acute and Subacute Phases
10. Prognostic Evaluation and Follow-up
10.1. Markers of Prognosis and Outcome
10.2. Outpatient Follow-up and Assessment with Complementary Methods
References
1. Epidemiology

The true incidence of myocarditis is difficult to determine, since clinical presentations are highly heterogeneous and a large proportion of cases run a subclinical course; in addition, endomyocardial biopsy (EMB), the gold standard for diagnosis, is performed very infrequently. A review of several autopsy series of young victims of unexplained sudden death showed a highly variable incidence of myocarditis, accounting for up to 42% of cases. The Global Burden of Disease Study 2013 used International Classification of Diseases codes in regional and global statistical analyses of 187 countries, estimating the annual incidence of myocarditis at around 22 cases per 100,000 patients seen. In cohorts of patients presenting clinically with dilated cardiomyopathy of undefined etiology, EMB-proven myocarditis can be detected in up to 16% of adult patients and up to 46% of pediatric patients. Many studies indicate a higher prevalence of acute myocarditis in men than in women. Some studies suggest that, in adults, the most common clinical manifestation is lymphocytic myocarditis, with a median age of 42 years, whereas patients with giant cell myocarditis have a median age of 43 years. In contrast, newborns and children more typically present with fulminant myocarditis and are more susceptible to virus-induced pathogenicity than adults. Myocarditis encompasses a broad spectrum of prognoses, depending on the severity of the initial clinical picture and the underlying etiology. Patients with mild symptoms and no ventricular dysfunction very frequently show spontaneous resolution and an excellent prognosis. However, it is estimated that approximately 30% of the more severe cases of myocarditis, documented by EMB and presenting with ventricular dysfunction, progress to dilated cardiomyopathy and heart failure (HF) with a guarded prognosis. In pediatric patients, the prognosis appears to be worse, with reported 10-year transplant-free survival of only 60%.
2. Definition and Etiology

Myocarditis can be defined as an inflammatory disease of the myocardium, diagnosed by histologic, immunologic, and immunohistochemical criteria. Histologic criteria include evidence of an inflammatory infiltrate involving the myocardium, associated with degeneration and necrosis of cardiomyocytes of nonischemic origin. The quantitative immunohistochemical criteria for identifying an abnormal inflammatory infiltrate, indicative of active myocarditis, are a leukocyte count ≥14 cells/mm², including up to 4 monocytes/mm², with the presence of CD3-positive T lymphocytes ≥7 cells/mm². Additionally, according to the cell type of the inflammatory infiltrate observed on histologic diagnosis, myocarditis can be classified as lymphocytic, eosinophilic, polymorphic, giant cell myocarditis, or cardiac sarcoidosis. Myocarditis can be caused by a wide variety of infectious agents, including viruses, protozoa, bacteria, chlamydiae, rickettsiae, fungi, and spirochetes ( ), and can also be triggered by noninfectious mechanisms, such as toxic myocarditis (drugs, heavy metals, radiation) and myocarditis due to autoimmune and hypersensitivity mechanisms (eosinophilic myocarditis, collagen diseases, virus-induced autoimmunity, rejection of the transplanted heart). Among all these triggers of myocarditis, viral infection is the most prevalent, particularly in children. The most prevalent cardiotropic viruses are enteroviruses, parvovirus B19 (PVB19), adenoviruses, influenza A virus, human herpesvirus (HHV), Epstein-Barr virus, cytomegalovirus, hepatitis C virus, and HIV. Some evidence suggests that there may be regional differences in the prevalence of the various viral agents, with a predominance of adenovirus, parvovirus, and herpesviruses in the European population and a preponderance of enteroviruses in the American population. However, part of these epidemiologic differences may stem from outbreaks of specific viral infections occurring over the years in different regions of the world, as well as from differences in viral detection techniques, and the debate about the true influence of geographic distribution on cardiotropic viral infections remains open. In South America, and especially in some regions of Brazil, Chagasic myocarditis caused by the protozoan Trypanosoma cruzi is one of the most prevalent causes of acute myocarditis, particularly in view of recent outbreaks associated with oral transmission in the Brazilian Amazon. Systemic autoimmune diseases such as Churg-Strauss syndrome and hypereosinophilic syndrome are associated with eosinophilic myocarditis. Giant cell myocarditis and sarcoidosis, although rare, are of special clinical importance since, if diagnosed early, they have specific treatment, which can improve prognosis. Autoimmune myocarditis may occur as isolated organ involvement or manifest in the context of autoimmune diseases with systemic manifestations, most frequently sarcoidosis, hypereosinophilic syndrome, scleroderma, and systemic lupus erythematosus. Novel cancer immunotherapies may be associated with a risk of myocarditis; most recently, cases linked to the use of immune checkpoint inhibitors such as nivolumab and ipilimumab have been recognized.
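Because the immunohistochemical thresholds above are fully quantitative, they can be written down as a simple check. The sketch below encodes them directly as a worked example; it assumes counts already normalized per mm² and is purely illustrative, not a substitute for histopathologic interpretation.

```python
# Worked example of the quantitative immunohistochemical criteria above:
# leukocytes >= 14 cells/mm2 (including up to 4 monocytes/mm2) and
# CD3+ T lymphocytes >= 7 cells/mm2. Illustrative simplification only.
def meets_inflammatory_criteria(leukocytes_mm2: float,
                                monocytes_mm2: float,
                                cd3_t_cells_mm2: float) -> bool:
    return (leukocytes_mm2 >= 14
            and monocytes_mm2 <= 4
            and cd3_t_cells_mm2 >= 7)

print(meets_inflammatory_criteria(18, 3, 9))   # True: consistent with active myocarditis
print(meets_inflammatory_criteria(10, 2, 5))   # False: below the leukocyte threshold
```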
2.1. Genetic Factors in the Etiopathogenesis of Myocarditis

In classic descriptions of the etiopathogenesis of myocarditis, the evidence for mechanisms involving viruses and autoimmune reactions is well documented. Much less is said about genetic predisposition. Many authors believe that genetic phenomena are likely to contribute to the development of viral and/or autoimmune myocarditis. Laboratory data consistent with this argument were documented in a study of 342 relatives of patients with dilated cardiomyopathy (DCM), in which cardiac antibodies were found at higher levels than observed in the control group. In addition, the likelihood of a complex interaction between genetic causes (individual predisposition) and nongenetic causes (linked to the offending agent) in the final progression to dilated cardiomyopathy is also widely recognized. The problem is that the scientific evidence supporting this argument is scarce. There is evidence that, in susceptible mouse strains, infection and inflammation trigger autoimmune reactions in the heart, usually as a result of myocyte necrosis and the subsequent release of autoantigens previously hidden from the immune system. The same genetically predisposed animal strains develop autoimmune lymphocytic or giant cell myocarditis, and subsequently dilated cardiomyopathy, after immunization with cardiac autoantigens (e.g., cardiac myosin). Furthermore, there is evidence that myocarditis may be present in specific cardiomyopathies (e.g., arrhythmogenic cardiomyopathy), leading to changes in phenotype and abrupt disease progression. In this context, some mutations may increase susceptibility to myocarditis. Overall, however, myocarditis remains classified as a nonfamilial acquired disorder, with experimental evidence indicating that genetic alterations may confer greater susceptibility to this disease.
3. Pathophysiology

In simplified terms, the pathophysiology of myocarditis can be divided into infectious and noninfectious forms. Infectious forms are the most common and include a vast range of viruses, bacteria, protozoa, fungi, and other rarer pathogens (see ). Viral agents are the most commonly involved and the most studied experimentally. Among noninfectious mechanisms, autoimmunity plays a role, through specific diseases, drugs, and autoantibodies; genetic predisposition is important in both forms (see ). Murine models of viral myocarditis suggest that its development has three phases: acute (a few days), subacute (a few weeks to months), and chronic (development of dilated cardiomyopathy); in addition, two pathogenic mechanisms are described: direct cytopathic injury induced by the microorganisms and a microorganism-induced anticardiac immune response. Phase 1 corresponds to the initial infection, which may resolve, even without sequelae, lead to HF or death, or progress to phases 2/3. In most patients with viral myocarditis, the pathogen is eliminated and the immune system downregulates its activity without further complications. However, in a minority of patients the virus is not eliminated, resulting in persistent myocardial injury and inflammation secondary to antibody production. Thus, viral myocarditis could be considered one of the precursors of dilated cardiomyopathy, with this progression observed in 21% of myocarditis cases after 3 years. Enteroviruses, especially coxsackievirus B3 (CVB3), initiate myocarditis by binding to the CAR (coxsackievirus and adenovirus receptor) and DAF (decay-accelerating factor) receptors, culminating in cell death through apoptosis or necroptosis. Infected cardiomyocytes are lysed, resulting in cytosolic release of proteins and viral products. After the acute phase, the course of the disease depends on the genetic background and may progress to dilated cardiomyopathy or resolve. Coxsackievirus infection activates innate and adaptive immune responses, including, initially, interferon production and activation of toll-like receptors (TLRs). In the adaptive response, T- and B-cell deficiency leads to viral persistence and worse outcomes. Another important aspect is the production of autoantibodies directed against cardiomyocytes, which occurs through the release of cardiac peptides, with molecular mimicry between cardiac proteins and viral agents. In the presence of costimulatory cytokines such as TNF and IL-1, these antibodies promote an effector T-lymphocyte response. Other viruses, such as parvovirus B19 and human herpesvirus 6, have been increasingly described in cardiac biopsies, with a trend toward reduced identification of enteroviruses and adenoviruses. However, the presence of these microorganisms has also been observed in hearts without myocarditis or with cardiomyopathies of other etiologies, making it complex to interpret the association between the presence of infectious agents in cardiac tissue and the development of myocarditis, as well as the influence of the persistence of these agents on clinical outcome. Regarding noninfectious myocarditis, animal models of autoimmune myocarditis involve genetically susceptible strains that demonstrate the presence of CD4+ T lymphocytes reactive to autoantigens, such as the myosin heavy chain, in the absence of infectious agents.
In addition to the lymphocytic autoimmune response, responses involving macrophages can be observed, as in granulomatous myocarditis, and eosinophils in hypersensitivity settings. Giant cell myocarditis is an autoimmune form of myocardial injury characterized histologically by an infiltrate of multinucleated giant cells, along with an inflammatory infiltrate of T cells, eosinophils, and histiocytes. The marked presence of CD8 (cytotoxic) cells and the release of inflammatory cytokines and oxidative stress mediators lead to intense injury of myocardial cells and their replacement by fibrosis, culminating in rapid loss of ventricular function and an unfavorable clinical course. In 20% of cases there is an association with autoimmune diseases such as Hashimoto thyroiditis, rheumatoid arthritis, myasthenia gravis, and Takayasu arteritis, among others. Sarcoidosis is multisystemic, involving the lung in 90% of cases, and is associated with accumulation of T lymphocytes, mononuclear phagocytes, and noncaseating granulomas in the involved tissues. In drug-induced myocarditis, the sensitivity response may take hours to months to develop. Part of the explanation for the hypersensitivity lies in the response to chemically reactive compounds that bind to proteins and induce structural modifications. These particles are phagocytosed by defense cells, often macrophages, which present them on their surface to T lymphocytes. As part of a delayed hypersensitivity response, cytokines such as interleukin 5, an eosinophil stimulant, are released. This accumulation of interleukin 5 promotes a large eosinophilic infiltrate, amplifying the hypersensitivity response and increasing myocardial injury. Genetic predisposition appears to favor this response pattern. Hypereosinophilic syndrome may occur in association with several systemic diseases, such as Churg-Strauss syndrome, cancer, and parasitic and helminthic infections, or may be related to vaccinations. These conditions can promote an intense inflammatory response in the myocardium, leading to cellular injury with dysfunction and HF. From the pathophysiologic standpoint, as in other organs, an intense eosinophilic infiltrate develops in the myocardium and promotes the release of mediators that are highly aggressive to the myocyte, leading to necrosis and loss of myocardial structure. Among the injurious factors are eosinophil-derived neurotoxin, eosinophil cationic protein, and eosinophil protease. In addition, the production of inflammatory cytokines such as IL-1, TNF-alpha, IL-6, IL-8, IL-3, and IL-5 and of macrophage inflammatory proteins promotes myocyte injury and loss, progressing to myocardial dysfunction. More recently, nivolumab, an antitumor checkpoint inhibitor, has been implicated as a cause of lymphocytic myocarditis. A possible pathophysiologic mechanism suggests that myocardial cells could share antigens with tumor cells and consequently become targets of activated T cells, resulting in an inflammatory infiltrate and the development of HF and conduction disturbances.
4.1. Diagnostic Criteria for Suspected Myocarditis

According to the consensus of the European Society of Cardiology Working Group on Myocardial and Pericardial Diseases, clinical suspicion of myocarditis is based on the association of the clinical presentation with abnormal complementary tests suggestive of inflammatory myocardial injury. Based on an analysis of the most frequent clinical presentations of myocarditis and on the diagnostic accuracy of complementary methods in predicting the presence of inflammatory myocardial injury, we propose stratifying the clinical diagnostic suspicion of myocarditis into three levels: low, intermediate, and high diagnostic suspicion ( ). These suspicion criteria were established by expert consensus and require future validation in clinical registries or multicenter studies.

4.1.1. Diagnostic Evaluation Flowchart

The diagnostic evaluation flowchart for myocarditis is based on the patient's degree of clinical and prognostic suspicion (see ). Patients with low clinical suspicion have a favorable prognosis; they are followed clinically and assessed for the need for noninvasive stratification of coronary artery disease (CAD). Patients with intermediate suspicion and a favorable clinical course follow the same line of clinical follow-up and diagnostic investigation as low-risk patients. Patients whose clinical status, ventricular function, arrhythmias, or AV block persist or worsen should undergo coronary angiography (CAT) and endomyocardial biopsy (EMB). Patients with high diagnostic suspicion generally have a worse prognosis and should undergo CAT and EMB for etiologic definition, with the aim of establishing a specific treatment to improve prognosis.
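Purely as an illustration of the flowchart logic described above, the sketch below encodes the three suspicion levels and the escalation rule in Python. The function name, level labels, and the `deteriorating` flag are hypothetical conveniences, not part of the guideline.

```python
# Minimal sketch of the diagnostic flowchart, assuming the three suspicion
# levels and the escalation rule described in the text. Labels are hypothetical.

def myocarditis_workup(suspicion: str, deteriorating: bool = False) -> str:
    """Return the suggested next step for a given level of clinical suspicion.

    suspicion: "low", "intermediate", or "high".
    deteriorating: persistent/worsening symptoms, ventricular dysfunction,
                   arrhythmias, or AV block during follow-up.
    """
    if suspicion == "high":
        # High suspicion carries a worse prognosis: proceed to invasive workup.
        return "coronary angiography (CAT) + endomyocardial biopsy (EMB)"
    if suspicion == "intermediate" and deteriorating:
        # Intermediate suspicion escalates only on clinical deterioration.
        return "coronary angiography (CAT) + endomyocardial biopsy (EMB)"
    # Low suspicion, or intermediate suspicion with a favorable course.
    return "clinical follow-up + consider noninvasive CAD stratification"

print(myocarditis_workup("intermediate", deteriorating=True))
```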
Myocarditis may manifest in different ways, ranging from mild, oligosymptomatic disease to severe disease associated with ventricular arrhythmias, hemodynamic instability, and cardiogenic shock. Rarely, it may present as sudden death (ranging from 8.6% to 12%), mainly in childhood or in young adults. The most common picture occurs in young patients with chest pain suggestive of acute myocardial infarction (AMI) with normal coronary arteries after a respiratory or intestinal viral infection, although viral symptoms do not always precede myocarditis (ranging from 10% to 80% of cases). Although it occurs predominantly in young patients, the syndrome may arise at any age. Subclinical myocarditis may also occur, with transient troponin elevation or electrocardiographic changes after an acute viral illness that manifests with nonspecific symptoms such as fever, myalgia, or respiratory or gastrointestinal symptoms.

There are different forms of presentation of this disease:
- A clinical picture resembling acute coronary syndrome (chest pain; electrocardiographic changes suggestive of ischemia; elevation of markers of myocardial necrosis with normal coronary arteries).
- New acute HF symptoms (between 3 days and 3 months) in the absence of coronary disease or a known cause for the symptoms.
- HF symptoms of recent onset over the past months (>3 months) in the absence of coronary disease or a known cause for the symptoms.
- Life-threatening conditions: unexplained ventricular arrhythmias and/or syncope and/or aborted sudden death; cardiogenic shock without associated coronary disease.
Patients presenting with chest pain may have variable electrocardiographic changes: ST-segment elevation or depression, T-wave inversion, or pathological Q waves. Segmental wall-motion abnormalities on Doppler echocardiography and elevated markers of myocardial necrosis, especially troponin, in patients with normal coronary arteries suggest the hypothesis of myocarditis. In most studies these patients have a good short-term prognosis, with the degree of ventricular involvement being a predictor of the risk of death. A minority develop persistent and recurrent myopericarditis with normal left ventricular (LV) function, which may respond to colchicine.
The presentation may be acute, with HF symptoms appearing within days, or subacute/chronic, as recent-onset cardiomyopathy in a patient with no apparent cause for the myocardial dysfunction. Presentation of myocarditis with HF symptoms (dyspnea, fatigue, exercise intolerance) may be associated with mild impairment of ventricular function (left ventricular ejection fraction [LVEF] between 40% and 50%) that improves within weeks to months. However, a smaller number of patients may present with significant ventricular dysfunction (LVEF <35%); of these, 50% develop chronic LV dysfunction, about 25% will require heart transplantation or a ventricular assist device, and the remaining 25% will show improvement in ventricular function during follow-up. A minority of cases may progress to cardiogenic shock requiring mechanical circulatory support. The risk of death or need for transplantation is strongly associated with the degree of hemodynamic compromise and of left and right ventricular dysfunction, which may respond to standard drug treatment for HF.

The fulminant form of the disease is characterized by the abrupt onset (days) of symptoms of advanced HF. These patients generally have severe ventricular dysfunction with little change in ventricular diameters. This is a dramatic presentation that requires early intervention. When the fulminant picture is associated with persistent ventricular tachycardia, or when there is no response to standard therapy, the prognosis is worse, and more severe forms of myocarditis, such as giant cell myocarditis, should be considered and investigated.
Myocarditis confirmed by immunohistopathological criteria is present in up to 40% of patients with chronic cardiomyopathy who remain symptomatic despite drug treatment. The presence of inflammation assessed by histology is associated with a worse prognosis.
Arrhythmias or conduction disturbances. Patients with myocarditis may also present disturbances of the conduction system, such as second-degree, third-degree, or complete atrioventricular (AV) block, particularly those with echocardiographic signs of hypertrophy due to interstitial edema. The presence of heart block, or of symptomatic or sustained ventricular arrhythmias, in patients with cardiomyopathy should raise the suspicion of myocarditis with a defined cause (Lyme disease; sarcoidosis; arrhythmogenic right ventricular dysplasia; or Chagas disease in endemic areas).

Cardiogenic shock. A small subgroup of patients presenting with sudden HF within 2 weeks of a viral illness may need inotropic support and/or mechanical circulatory support. Ventricular function generally recovers in those who survive the initial picture, but appropriate therapy must be instituted as early as possible. ( ) summarizes the main clinical syndromes raising suspicion of myocarditis and suggests possible agents responsible for each form of presentation of the disease.
4.3.1. Laboratory Markers of Inflammatory Injury

No biomarker alone is sufficient to diagnose myocarditis; however, some biomarkers can be useful as prognostic markers. The main biomarkers used in this evaluation are discussed below.

Inflammatory markers. Leukocyte count, erythrocyte sedimentation rate (ESR), and C-reactive protein (CRP) may be elevated in patients with myocarditis. However, they have no diagnostic value, as they are nonspecific.

Troponins. Troponins are more specific than CPK and CK-MB for myocardial injury and are frequently elevated in patients with myocarditis. However, normal troponins do not exclude the diagnosis. Although not sufficient to establish the diagnosis of myocarditis, they may suggest it, provided obvious causes such as AMI and acute HF are excluded. In a small study evaluating several biomarkers, troponins were predictors of biopsy-confirmed myocarditis, with an area under the curve of 0.87, sensitivity of 83%, and specificity of 80%. Troponin is useful for the diagnosis of myocarditis in patients with acute-onset cardiomyopathy.

Natriuretic peptides. BNP and NT-proBNP may be elevated in myocarditis. They are not useful for diagnostic confirmation, since they rise in response to different causes of HF; they may, however, be prognostic markers. In a study of biopsy-confirmed myocarditis, among several biomarkers evaluated, only NT-proBNP above the upper quartile (>4,245 pg/mL) was a predictor of death or heart transplantation.

4.3.2. Laboratory Markers for Etiopathogenic Investigation

Viral serologies. These are of limited value in the diagnosis of myocarditis, since IgG antibodies against cardiotropic viruses are highly prevalent in the general population in the absence of viral heart disease. In one study, no correlation was observed between viral serology and biopsy findings. In specific situations, serology for hepatitis C, HIV testing in high-risk individuals, and Lyme disease testing in endemic areas may be useful. The search for serological markers should be guided by a high clinical suspicion of the specific disease ( ).

Immunohistochemical markers and viral genome analysis. These are superior to the Dallas criteria and are therefore useful in the etiologic diagnosis. The complication rate of EMB is low ( ).
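As a worked numerical aside (not from the guideline), the reported troponin sensitivity of 83% and specificity of 80% can be turned into post-test probabilities once a pretest prevalence is assumed; the 40% prevalence below is an arbitrary illustration.

```python
# Worked example: positive/negative predictive values from the reported
# sensitivity (0.83) and specificity (0.80), under an assumed pretest
# prevalence of 0.40 (illustrative only; not a guideline figure).

def predictive_values(sens: float, spec: float, prev: float):
    tp = sens * prev              # true positives per unit population
    fp = (1 - spec) * (1 - prev)  # false positives
    fn = (1 - sens) * prev        # false negatives
    tn = spec * (1 - prev)        # true negatives
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return ppv, npv

ppv, npv = predictive_values(0.83, 0.80, 0.40)
print(f"PPV = {ppv:.2f}, NPV = {npv:.2f}")  # PPV = 0.73, NPV = 0.88
```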
The electrocardiogram (ECG) is commonly ordered for myocarditis screening, but its specificity is limited, even though patients frequently show some ECG abnormality. Sinus tachycardia may be the most common ECG presentation. Some ECG changes are more suggestive of myocarditis than others: ST-T elevation in myocarditis is typically concave (rather than convex, as in myocardial ischemia), diffuse without reciprocal changes, and transient and reversible over the course of the disease ( ). The early repolarization (ER) pattern on the ECG of some patients with acute myocarditis may be evidence of localized inflammation/edema in the LV epicardium. Oka et al. showed that the ER pattern in acute myocarditis was transient, reversible, and not associated with a worse prognosis.

AV block in the presence of mild LV dilatation may be due to several causes (including laminopathy), but may also be suggestive of Lyme disease, cardiac sarcoidosis, or giant cell myocarditis. Ogunbayo found that, among 31,760 patients with a primary diagnosis of myocarditis, heart block was reported in 540 (1.7%): 21.6% first-degree AV block, 11.2% second-degree AV block, and 67.2% high-grade AV block. High-grade AV block was independently associated with increased morbidity and mortality. A recent meta-analysis showed that QRS widening was present as an early feature of fulminant myocarditis. In a study in which patients admitted acutely with myocarditis without prior HF underwent EMB, QRS widening was an independent predictor of cardiac death or heart transplantation. A significant proportion of patients with acute myocarditis suffer sudden cardiac death, presumably due to cardiac arrhythmia. A recent study by Adegbala showed a total of 32,107 admissions for acute myocarditis between 2007 and 2014 in the USA, of which 10,844 (33.71%) involved arrhythmias, the most common being ventricular tachycardia (22.3%) and atrial fibrillation (26.9%); the presence of these arrhythmias had an impact on mortality. In summary, the ECG is a convenient tool for risk stratification and initial screening, but its diagnostic value is weak.

4.4.1. Diagnostic Criteria by Electrocardiogram/Holter/Stress Testing

The 12-lead ECG is standard practice in the diagnostic workup and prognostic assessment of myocarditis ( ). The changes most frequently associated with myocarditis on the 12-lead ECG and/or Holter and/or stress testing include any of the following: first- to third-degree atrioventricular block or bundle branch block; ST/T changes (ST elevation or non-ST elevation, T-wave inversion); sinus arrest; ventricular tachycardia or fibrillation and asystole; atrial fibrillation; reduced R-wave height; intraventricular conduction delay (widened QRS complex); abnormal Q waves; low voltage; frequent premature beats; supraventricular tachycardia.

4.4.2. Prognosis

QRS widening, high-grade AV block, ventricular tachycardia, and atrial fibrillation increased mortality.
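To make the "any of the following" rule in 4.4.1 concrete, here is a minimal sketch; the finding strings are hypothetical labels chosen for illustration, not standard codes.

```python
# Sketch of the "any of the following" ECG screening rule in 4.4.1.
# The finding labels are hypothetical identifiers.

MYOCARDITIS_ECG_FINDINGS = {
    "av_block", "bundle_branch_block", "st_t_change", "sinus_arrest",
    "ventricular_tachycardia_or_fibrillation", "asystole",
    "atrial_fibrillation", "reduced_r_wave", "wide_qrs",
    "abnormal_q_waves", "low_voltage", "frequent_premature_beats",
    "supraventricular_tachycardia",
}

def ecg_suggestive_of_myocarditis(findings: set) -> bool:
    """True if any listed 12-lead ECG/Holter/stress-test change is present."""
    return bool(findings & MYOCARDITIS_ECG_FINDINGS)

print(ecg_suggestive_of_myocarditis({"wide_qrs", "sinus_tachycardia"}))  # True
```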
The echocardiogram has a limited role in the diagnosis of myocarditis itself. It is, however, a very important tool for excluding other conditions and should always be performed when there is clinical suspicion ( ). There is no specific echocardiographic finding; the abnormalities encountered merely mirror myocardial inflammation. Findings may therefore range from segmental wall-motion abnormalities (a differential diagnosis with ischemic heart disease) to diffuse abnormalities (global hypokinesia of one or both ventricles). When the involvement is acute and severe, the ventricular cavities are small (not dilated), and myocardial edema (increased wall thickness) and pericardial effusion may be evident, findings common in fulminant myocarditis. Involvement of the right ventricle (RV) generally indicates a more guarded prognosis. An interesting role of echocardiography is as an adjunct during EMB, not only identifying the ideal site for tissue sampling but also guiding the interventionist and helping avoid complications ( ).
In the evaluation of patients with myocarditis, as in other nonischemic cardiomyopathies, cardiac magnetic resonance (CMR) is highly useful for determining ventricular morphological and functional parameters. It has been extensively validated for quantifying the volumes, mass, and function of both the LV and the RV, and is currently considered the gold-standard diagnostic modality for this assessment. Given its high spatial and temporal resolution, and because its three-dimensional nature makes it independent of geometric assumptions, CMR offers excellent accuracy and reproducibility, characteristics especially useful for longitudinal follow-up.

The greatest value of CMR in patients with suspected or confirmed myocarditis, however, lies in its capacity for detailed tissue characterization. It can identify both the inflammatory myocardial injury of the acute and subacute phases and the scarring frequently present in the chronic phase of the disease. The main CMR techniques classically used to characterize myocardial injury in myocarditis are T2-weighted sequences ("T2 imaging") and the late gadolinium enhancement technique. In images acquired with T2-weighted sequences, the higher the water content of a given tissue, the higher its signal intensity; this technique therefore allows assessment of the myocardial edema secondary to the inflammatory process in acute myocarditis ("edema imaging"). The late enhancement technique, in turn, identifies regions of necrosis in acute or subacute myocarditis and regions of fibrosis in chronic myocarditis.

It should be emphasized that the late enhancement pattern of myocarditis is very different from that observed in AMI. The main difference is that, in infarction, late enhancement always involves the subendocardium: involvement may even be transmural, but the subendocardial layer is always affected. In myocarditis, late enhancement is most often mid-wall/epicardial, usually sparing the endocardium. Moreover, whereas in infarction the regions of late enhancement tend to be single, homogeneous, and distributed according to coronary territories, in myocarditis they are usually multifocal, heterogeneous, and scattered, not respecting coronary territories.

The original Lake Louise consensus criteria (LLC), published in 2009, were based on three CMR techniques. Besides T2-weighted imaging ("edema imaging") and late enhancement, both mentioned above, they included the so-called early myocardial enhancement technique. The latter was excluded in the update of the diagnostic criteria after it was shown to add no incremental diagnostic value to the other techniques; in practice, early myocardial enhancement was already no longer being used clinically in most CMR centers worldwide. Recently, new CMR techniques capable of measuring the longitudinal (T1) and transverse (T2) relaxation times of the myocardium were introduced as potentially sensitive and specific methods for detecting myocardial inflammation. In general, T1 or T2 values are measured pixel by pixel and presented as parametric maps, the so-called myocardial T1 and T2 maps.
The T1 map can be acquired before contrast (native T1) and 15 to 20 minutes after contrast (a moment of relative equilibrium of gadolinium concentration), thus allowing calculation of the myocardial extracellular volume (ECV). The T2 map is usually acquired only before contrast administration. The incorporation of T1 and T2 mapping was the central motivation for the recent update of the LLC for the diagnosis of myocarditis by CMR. According to the new consensus, the diagnosis is based on the presence of two main criteria, which may or may not be associated with supportive criteria ( ). The first main diagnostic criterion aims to identify myocardial edema and is based on T2-based techniques: (1) T2-weighted imaging ("edema imaging") and/or (2) T2 mapping. The second main diagnostic criterion also detects myocardial edema but primarily aims to identify necrosis, fibrosis, and capillary leak; it is based on T1-based techniques: (1) late gadolinium enhancement and/or (2) T1 mapping (native T1 or ECV). The new criteria for the diagnosis of myocarditis, myopericarditis, or perimyocarditis, published in 2018, are listed in ( ).

The accuracy of CMR in patients with suspected myocarditis under the first LLC was estimated at 78% (sensitivity 67%, specificity 91%). These estimates were later confirmed in a meta-analysis demonstrating 83% accuracy, with 80% sensitivity and 87% specificity. Similarly, an even more recent meta-analysis demonstrated 78% sensitivity and 88% specificity, with an area under the curve (AUC) of 0.83. Consistent data on the accuracy of CMR using the diagnostic criteria proposed in the second version of the LLC are not yet available; however, a small recent study including only 40 patients with acute myocarditis demonstrated 88% sensitivity and 96% specificity for CMR using the new revised criteria (see ). Recommendations for the use of CMR in the diagnostic and prognostic evaluation of patients with suspected acute myocarditis are summarized in ( ). Based on the body of scientific evidence accumulated since the first version of this SBC guideline, CMR can now be assigned a more structured position in decision-making for patients with suspected myocarditis, as proposed in the risk stratification below, in ( ). This stratification should be integrated into the broader risk stratification that includes the clinical presentation and other complementary tests.
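To make the two-criterion logic and the ECV calculation concrete, here is a minimal sketch. It assumes the standard ECV formula ECV = (1 − Hct) · (ΔR1_myocardium / ΔR1_blood), with R1 = 1/T1; the threshold decisions (whether a T2- or T1-based technique is "positive" against local normal ranges) are left to the caller, since normal values are scanner- and sequence-specific, and the numeric inputs below are illustrative only.

```python
# Minimal sketch (not clinical software). Assumes the standard ECV formula
# ECV = (1 - Hct) * (dR1_myocardium / dR1_blood), with R1 = 1/T1, and a
# simplified reading of the two main 2018 Lake Louise criteria.

def ecv(t1_myo_native, t1_myo_post, t1_blood_native, t1_blood_post, hct):
    """Myocardial extracellular volume fraction from pre/post-contrast T1 (ms)."""
    d_r1_myo = 1.0 / t1_myo_post - 1.0 / t1_myo_native
    d_r1_blood = 1.0 / t1_blood_post - 1.0 / t1_blood_native
    return (1.0 - hct) * d_r1_myo / d_r1_blood

def lake_louise_2018(t2_based_positive: bool, t1_based_positive: bool) -> str:
    """Two main criteria: T2-based (edema) and T1-based (LGE / T1 mapping)."""
    if t2_based_positive and t1_based_positive:
        return "both main criteria met: suggestive of acute myocardial inflammation"
    if t2_based_positive or t1_based_positive:
        return "one main criterion met: myocarditis possible in the right clinical context"
    return "main criteria not met"

# Illustrative numbers only (site-specific normal ranges vary):
print(round(ecv(1050, 470, 1580, 350, 0.42), 3))  # ~0.306
print(lake_louise_2018(t2_based_positive=True, t1_based_positive=True))
```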
Nuclear medicine has had a growing role in the evaluation of patients with myocarditis. New radiotracers and new equipment have opened a whole new spectrum of contributions to the management of patients with suspected inflammatory myocardial disease. The pathophysiological changes of the various types of myocarditis form the basis for the use of nuclear medicine techniques: the inflammatory process that injures the myocardium is characterized by infiltration of lymphocytes and macrophages, increased vascular permeability, increased glucose consumption at the site of inflammation, and cellular necrosis with reduced tissue perfusion compared with intact myocardium. These features translate into greater myocardial uptake of gallium-67 citrate (especially useful in sarcoidosis), increased accumulation of glucose labeled with radioactive fluorine (18F-FDG), and reduced myocardial perfusion seen with tracers such as 99mTc-sestamibi or thallium-201. ( ) lists the main radiotracers used in myocarditis.

4.7.1. Radiotracers for Single-Photon Emission Computed Tomography (SPECT)

Gallium-67 citrate is a well-established tracer for infection imaging in nuclear medicine; it binds to inflammatory cells at sites of increased vascular permeability thanks to its characteristic binding to iron-transport proteins such as lactoferrin and within leukocyte lysosomes. Gallium-67 has low sensitivity (36%) for detecting myocarditis in patients with recent-onset dilated cardiomyopathy and should not be used routinely for this indication ( ). The only type of myocarditis with a high positive yield on gallium-67 scintigraphy is that due to sarcoidosis, in which granulomas with giant cells are especially avid for tracer retention. A positive gallium-67 scintigraphy is considered a major criterion for the diagnosis of cardiac sarcoidosis in the Heart Rhythm Society (HRS) expert consensus. Another significant finding in patients with cardiac sarcoidosis is the perfusion abnormality caused by myocardial microvascular constriction in the vessels surrounding the granulomas. The perfusion defect seen on resting scintigraphy may disappear on stress imaging, a pattern called reverse redistribution that may be associated with sarcoidosis. Gallium-67 scintigraphy can be used as an alternative for patients without access to, or with contraindications to, gadolinium-enhanced MRI (claustrophobia, contrast allergy, renal failure), and can contribute in cases of clinically suspected myocarditis (fever, recent history of respiratory or intestinal infection, elevated necrosis markers). It is also useful in the differential diagnosis between AMI with normal coronary arteries and myocarditis, as in the study by Hung et al., in which the technique was positive when performed early after symptom onset. Some cases of myocarditis may involve regional myocardial injury and underlie arrhythmias, and gallium-67 studies may then show focal accumulation in areas of the ventricles or even of the atria alone.

4.7.2. Radiotracers for Positron Emission Tomography (PET)

18F-FDG is taken up by inflammatory cells through active transport, independently of insulin action.
Consequently, when glucose uptake by the myocardium is adequately suppressed, 18F-FDG PET becomes a sensitive tool for diagnosing myocardial inflammation and for monitoring its response to treatment ( ). The largest number of studies of 18F-FDG PET in myocarditis concern cardiac sarcoidosis, for which a recent meta-analysis demonstrated 84% sensitivity and 83% specificity. For 18F-FDG PET to be useful in sarcoidosis or in other inflammatory cardiac conditions, such as myocarditis, infective endocarditis, or post-transplant rejection, adequate patient preparation is crucial to prevent circulating insulin from causing non-inflammatory 18F-FDG accumulation in the myocardium. Among the various preparation protocols, a prolonged fast of 12 to 18 hours before tracer injection is one of the most widely used, as is a diet rich in lipids and proteins, while the use of heparin is not consensual. The diagnostic hallmark of inflammatory activity is focal 18F-FDG uptake in the myocardium, while 18F-FDG uptake in the RV and inflammatory uptake in areas of hypoperfusion, the so-called mismatch areas (increased metabolism with reduced perfusion), carry prognostic significance. 18F-FDG PET is also used to monitor treatment response in cardiac sarcoidosis and to assess extracardiac disease activity. A proposed follow-up algorithm is shown in ( ), adapted from Young et al.

For myocarditis not associated with sarcoidosis, the standard diagnostic technique is CMR. Increased signal intensity on T2-weighted images (edema), increased early gadolinium enhancement (hyperemia), and late gadolinium enhancement (necrosis) have, combined, 67% sensitivity and 91% specificity for the diagnosis of myocarditis. In many cases, however, there are limitations to the adequate use of the technique, such as poor T2 signal quality, artifacts, or the impossibility of using gadolinium contrast. In these cases, 18F-FDG PET can be very useful to complement the diagnostic workup, whether with PET-CT equipment or, more recently, with PET-MR equipment, which couples PET with an MRI scanner. PET-MR studies have shown PET to be superior to MRI in identifying areas of active cardiac inflammation. 18F-FDG PET-CT has been used successfully to identify active inflammation in conditions such as systemic lupus erythematosus, giant cell myocarditis, scleroderma, and even rheumatic carditis. Another growing recent use of 18F-FDG PET is in investigating the etiology of arrhythmias, with cardiac sarcoidosis and chronic myocarditis, including Chagas disease, as causes of ventricular arrhythmias, as well as in investigating conduction disturbances, especially in individuals under 50 years of age with atrioventricular block, in whom PET has identified several cases of sarcoidosis and even cardiac tuberculosis as the cause of the conduction disturbance. In the study by Tung et al., 50% of patients with cardiomyopathy and unexplained ventricular arrhythmias had a positive 18F-FDG PET, indicating myocarditis not suspected by other techniques.
4.7.3. Additional Perspectives

New radiotracers have been evaluated in patients with myocardial inflammation, such as gallium-68 DOTATATE (68Ga-DOTATATE), which has affinity for the somatostatin receptors expressed on inflammatory cells. Another radiotracer under analysis is 123I-MIBG, which assesses the state of presynaptic cardiac adrenergic innervation. Although this tracer does not directly identify the inflammatory state, it is strongly related to an increased risk of ventricular arrhythmias, particularly in patients with chronic Chagas myocarditis, by demonstrating areas of viable myocardium that are denervated and therefore more vulnerable to sustained ventricular tachycardia.
Acute myocarditis may mimic AMI, with typical chest pain, ECG abnormalities similar to AMI with or without ST-segment elevation, elevated cardiac enzymes, and hemodynamic instability. When myocarditis is suspected with an infarct-like presentation, CAD must be excluded by invasive coronary angiography or coronary CT angiography. Routine invasive coronary angiography should also be performed during the investigation of a new dilated cardiomyopathy. An analysis of 46 publications evaluating the underlying pathophysiology of myocardial infarction with nonobstructive coronary arteries (MINOCA) revealed a typical infarct on CMR in only 24% of patients, myocarditis in 33%, and no significant abnormality in 26%. Young age and CRP were associated with myocarditis, whereas male sex, treated hyperlipidemia, a high troponin ratio, and low CRP were associated with true AMI. Since patients with acute myocarditis mimicking ST-elevation AMI have a favorable prognosis, establishing the correct diagnosis is important to avoid unnecessary and potentially dangerous treatments.

Coronary computed tomography angiography (CTA) is a simple and fast examination that provides a comprehensive assessment of the coronary arteries and myocardial tissue. In practice, first-pass CTA acquisition allows evaluation of coronary anatomy and left ventricular enhancement. Late CTA acquisition is performed 3 to 5 minutes later, without the need to reinject contrast medium, allowing iodine uptake to be captured on late contrast-enhanced images in a manner similar to cardiac MRI. Cardiac CTA and MRI each offer their own ways of avoiding invasive coronary angiography, excluding (significant) CAD, and detecting other diseases such as acute aortic dissection, pulmonary embolism, myocarditis, or stress cardiomyopathy. The wide availability of CTA, combined with the possibility of ruling out acute coronary syndrome (ACS) with coronary angiography during the same examination, makes it promising for refining the imaging of acute myocarditis ( ). In children with suspected myocarditis and Kawasaki disease, computed tomography angiography can be used to assess coronary artery abnormalities.

The latest European Society of Cardiology (ESC) guideline suggests that, in the absence of angiographically significant coronary artery disease (stenosis ≥50%) or preexisting conditions that could explain the clinical picture, patients who have at least one of the five clinical presentations (acute chest pain; new or worsening HF with ≤3 months of dyspnea, fatigue, and/or signs of HF; chronic HF with >3 months of dyspnea, fatigue, and/or signs of HF; palpitations, symptoms of unexplained arrhythmias, and/or syncope, and/or aborted sudden death; unexplained cardiogenic shock) and/or certain supportive diagnostic tests (ECG, Holter, troponin, ventricular function abnormalities, and edema and/or late gadolinium enhancement with a classic myocardial pattern) should be considered to have "clinically suspected myocarditis" and thus warrant further evaluation.
Histopathological analysis of myocardial tissue is an important tool for diagnosis and prognosis in patients with myocarditis. Endomyocardial biopsy (EMB) using standardized histopathological (Dallas) and immunohistochemical criteria is the current gold standard for the diagnosis of myocarditis. The Dallas criteria alone have limitations, owing to the high interobserver variability in pathological interpretation and in the detection of non-cellular inflammatory processes, yielding a diagnosis in only about 10% to 20% of patients. Thus, according to the WHO definition, immunohistochemistry with a panel of monoclonal and polyclonal antibodies is mandatory to differentiate the inflammatory components present. Viral genome analysis in the diseased myocardium, when coupled with immunohistochemical analyses, has improved the diagnostic and prognostic accuracy and utility of EMB. Viral screening is recommended for enterovirus, influenza, adenovirus, cytomegalovirus, Epstein-Barr virus, parvovirus B19, and human herpesvirus. However, since some viral genomes (e.g., PVB19) can be detected in normal hearts and in ischemic and valvular heart disease, complementary use of virus-specific mRNA may be necessary to define active infection.

4.9.1. Considerations for Indication

EMB performed early in a severe clinical presentation helps in the differential diagnosis of specific types of myocarditis (giant cell, allergic, eosinophilic, sarcoidosis) that imply different treatments (e.g., immunosuppressants) and prognoses ( ). It also provides the differential diagnosis of diseases that can simulate myocarditis (arrhythmogenic right ventricular cardiomyopathy, takotsubo cardiomyopathy, peripartum cardiomyopathy, inflammatory/storage disorders). Currently, the main indication for EMB is in patients with recent-onset HF (less than 2 weeks) accompanied by a severe clinical presentation (hemodynamic instability, use of mechanical circulatory or inotropic support, refractoriness to clinical treatment) or high-risk arrhythmias (sustained or symptomatic ventricular arrhythmias or high-grade heart block) ( ). It should be noted, however, that previous recommendations were based mainly on the Dallas criteria, whose diagnostic, prognostic, and therapeutic value is limited. With the use of immunohistochemical and viral genome analysis, there is a growing tendency toward more liberal use of EMB in clinically suspected myocarditis, regardless of the pattern and severity of presentation. On the other hand, the value of EMB is questionable in patients with low-risk syndromes who respond to standard treatment, where it offers no prospect of therapeutic or prognostic implications. Finally, in intermediate-risk syndromes, EMB should be considered when symptoms, ventricular dysfunction, arrhythmias, or conduction disturbances persist or worsen ( ).

4.9.2. Prognosis

While the Dallas criteria are not an accurate predictor of clinical outcomes, immunohistological evidence of myocardial inflammation is associated with an increased risk of cardiovascular death and need for heart transplantation. In giant cell myocarditis, the severity of necrosis and fibrosis is associated with an increased risk of death and transplantation.
The absence or presence of residual enteroviral genomes in repeated samples has correlated with progression to end-stage cardiomyopathy, whereas spontaneous viral clearance was associated with improved systolic function.

4.9.3. Technique

The procedure should be performed in the catheterization laboratory by an interventional cardiologist experienced in this procedure. Anesthesia is local, with conscious sedation if necessary, always under the supervision of the anesthesiologist. EMB can be performed safely under direct fluoroscopic guidance and should be assisted by echocardiography, which guides correct positioning of the bioptome so as to avoid puncturing the RV free wall. CMR is particularly useful for facilitating a guided approach, given its utility in distinguishing normal from diseased myocardium, and has been evaluated as a means of increasing predictive values. There are no comparative studies supporting a recommendation of RV versus LV endomyocardial biopsy; however, LV EMB should be carefully considered in cases of disease restricted to, or predominating in, the LV. Samples should be obtained from the right ventricle, especially the distal portion of the interventricular septum and the apical trabeculated area, avoiding the RV free wall. The number of samples depends on the investigation to be performed: for suspected viral myocarditis, 10 samples (6 for viral testing, 2 for hematoxylin-eosin, and 2 for immunohistochemistry); for infiltrative or storage diseases, 6 fragments (2 for hematoxylin-eosin, 2 for immunohistochemistry, and 2 for electron microscopy). Samples for HE and immunohistochemistry should be placed in a flask of 10% buffered formalin and should not be refrigerated. Samples for viral testing should be placed in Eppendorf®-type microtubes (without transport solutions), these in containers with dry ice, and quickly transferred to −70 °C freezers for storage. Samples for electron microscopy should be placed in Eppendorf® tubes with OCT solution. EMB can be repeated, if necessary, to monitor the response to etiology-directed therapy or if a sampling error is suspected in a patient with unexplained progression of HF.

4.9.4. Complications

Although traditional EMB is considered a safe procedure, various complications have been reported. When performed in experienced centers, the rate of major complications is <1%, similar to that of coronary angiography. The use of echocardiography together with fluoroscopy significantly reduces the chance of inadvertent puncture causing myocardial perforation or coronary injury. Complications related to vascular access and sheath insertion can be distinguished from complications related to sample removal. Complications related to vascular access include incidental arterial puncture, prolonged bleeding, hematoma, and vascular dissection. Those commonly described are: vasovagal reaction, AV block of varying degrees, RV free-wall perforation, pneumothorax, interventricular septum perforation, puncture-site hematoma, intracardiac fistulas, retroperitoneal hematoma (femoral access), pericardial effusion, thrombus dislodgement, cardiac tamponade, rupture of tricuspid chordae, and ventricular arrhythmias.
In summary, the risk of EMB depends on the patient's clinical condition, the operator's experience, and all the technological tools available to prevent, diagnose, and manage complications.
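As a small bookkeeping aid for the sampling protocol in 4.9.3, the sketch below encodes the sample counts as a lookup table; the dictionary keys and function name are hypothetical conveniences, but the counts follow the text.

```python
# Sample-allocation lookup for EMB, following the counts in 4.9.3.
# Keys and function name are hypothetical.

EMB_SAMPLE_PLAN = {
    "viral_myocarditis": {         # 10 samples total
        "viral_testing": 6,
        "hematoxylin_eosin": 2,
        "immunohistochemistry": 2,
    },
    "infiltrative_or_storage": {   # 6 samples total
        "hematoxylin_eosin": 2,
        "immunohistochemistry": 2,
        "electron_microscopy": 2,
    },
}

def total_samples(indication: str) -> int:
    """Total number of fragments to obtain for a given indication."""
    return sum(EMB_SAMPLE_PLAN[indication].values())

print(total_samples("viral_myocarditis"))  # 10
```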
4.10.1. Histological Analysis

Myocarditis is defined as an inflammatory disease of the myocardium diagnosed by histological and immunohistological criteria. According to the Dallas criteria, active myocarditis is histologically defined as an inflammatory infiltration of the myocardium with necrosis of adjacent myocytes, whereas borderline myocarditis is diagnosed when the inflammatory infiltrate is present but injury/necrosis of the cardiac cells themselves is not demonstrated. The Dallas criteria are, however, considered inadequate for diagnosing clinically suspected myocarditis because of their variability in interpretation, lack of prognostic value, and low sensitivity due to sampling error. This limitation can be overcome by adding immunohistological staining for infiltrating cells (leukocytes/T lymphocytes/macrophages) and surface antigens (ICAM-1/HLA-DR). Beyond the diagnosis of myocarditis, histopathological assessment is essential for classifying myocarditis into lymphocytic, eosinophilic, giant cell, granulomatous, and/or polymorphic forms, which generally reflect different etiopathogeneses of the inflammatory process. In addition, histological examination of paraffin sections with different staining protocols (HE, EvG, PAS, Azan) is used to detect myocardial cell death, scarring, fibrosis, dysfunction, cardiomyocyte alterations, and pathological vascular conditions. Amyloidosis, iron deposits, glycogen, and other storage diseases can be excluded or specified by additional staining.

4.10.2. Immunohistochemical Analysis

Immunohistochemistry has significantly increased the sensitivity of EMB and provides information on clinical prognosis. The diagnostic accuracy of immunohistology for detecting inflammation is greater than that of histological criteria. Immunohistochemical assessment is based on the analysis of specific antigen-antibody reactions. A value of >14 leukocytes/mm², with at least >7 T lymphocytes/mm², has been considered a realistic cutoff for reaching the diagnosis of myocarditis. Quantification of additional infiltrating cells, including macrophages (Mac-1/CD69), CD4+ and CD8+ cells, and cytotoxic cells (perforin), and quantification of human leukocyte antigen (HLA-DR) and intercellular adhesion molecule-1 (ICAM-1), is mandatory to further characterize the inflammatory cell populations. Exact characterization and quantification of myocardial inflammation is thus relevant for prognosis and for identifying distinct markers of acute, infectious, virus-negative, and chronic/autoimmune myocarditis (see ). Additional immunofluorescence stains, such as C3d and C4d, should be used to define humoral rejection in heart transplant EMB or for subtyping amyloid forms.

4.10.3. Genetic Profile Analysis

Idiopathic giant cell myocarditis and cardiac sarcoidosis are rare disorders that cause acute HF with cardiogenic shock and/or life-threatening ventricular arrhythmias in the absence of other etiologies, and their prognosis is extremely poor, with 4-year survival rates below 20%. The main obstacle to a correct diagnosis is sampling error in the histological examination of EMB specimens.
4.10.3. Genetic Profile Analysis

Idiopathic giant cell myocarditis and cardiac sarcoidosis are rare disorders that cause acute heart failure (HF) with cardiogenic shock and/or life-threatening ventricular arrhythmias in the absence of other etiologies, and they carry an extremely poor prognosis, with 4-year survival rates below 20%. The main obstacle to a correct diagnosis is sampling error in the histological examination of EMBs. Distinct differential gene expression profiles have been identified that allow clear discrimination between tissues harboring giant cells and those with acute myocarditis or inflammation-free controls. Moreover, disease-specific gene profiles change during effective treatment and can be applied to therapy monitoring.

4.10.4. Virology

Microbial genomes are detected, quantified, and sequenced using PCR-based methods, including nested RT-PCR and quantitative PCR, providing viral load analysis. Sequencing of the amplified viral gene product is mandatory. In particular, all viruses that may be responsible for the disease can be analyzed. The cardiotropic viral genomes most commonly reported in the myocardium are parvovirus B19 (B19V), enterovirus (EV), adenovirus (ADV), influenza virus, human herpesvirus 6 (HHV-6), Epstein-Barr virus, cytomegalovirus, hepatitis C virus, and human immunodeficiency virus (HIV) ( ). B19V is the predominant cardiotropic virus found in myocarditis. Its clinical impact on the heart is still under discussion. Transcriptionally active cardiotropic B19V with positive replication intermediates in EMBs appears to be clinically relevant, because patients with myocarditis characterized by transcriptionally active cardiotropic B19V with positive replication intermediates show altered gene expression compared with control patients with latent B19V. However, PCR may be negative even when the causal organism is viral, owing to viral clearance. Although viruses are thought to be the most common cause of myocarditis, viral titers are not useful for diagnosis or treatment.
5.1. Therapeutic Flowcharts

Most cases of myocarditis have a favorable prognosis, with spontaneous regression of clinical symptoms and preserved ventricular function, without the need for therapeutic intervention. In most patients, the therapeutic flowchart for myocarditis is guided by the level of diagnostic suspicion, since only a minority of patients will undergo investigation by EMB ( ). Patients with low diagnostic suspicion of myocarditis, a clinical presentation without signs of severity, preserved ventricular function, and no ventricular arrhythmias have a favorable prognosis and are followed clinically without drug therapy. In patients with intermediate diagnostic suspicion, with preserved ventricular function or with ventricular dysfunction that improves over time, cardioprotective therapy with beta-blockers and angiotensin-converting enzyme inhibitors or angiotensin receptor blockers is used to preserve or improve ventricular function. Patients with high diagnostic suspicion who develop any indicator of worse prognosis, such as clinical deterioration, hemodynamic instability, persistent or worsening ventricular dysfunction, frequent ventricular arrhythmias, or significant conduction disturbances, should undergo EMB to search for inflammation and the etiological agent, which offers the possibility of specific therapy with immunosuppression, immunomodulation, or antivirals, which may improve clinical status, functional class, ventricular function, and survival ( ).

5.2. Immunosuppression: Indications and Types

Immunosuppressive therapy in myocarditis aims to suppress the inflammatory response and autoimmune activity, targeting clinical improvement and improvement of ventricular function, as well as a reduction in mortality. In lymphocytic myocarditis, despite the pathophysiological rationale for immunosuppression, based on the presence of myocardial inflammation on EMB together with a negative viral genome search, the evidence supporting its use is limited. Factors such as spontaneous regression of inflammation, the lack of uniformity of diagnostic criteria across studies, the small number of patients in most trials, the heterogeneity of the clinical characteristics of the study populations, and the absence of studies whose primary objective was to assess mortality reduction in isolation make it difficult to analyze the clinical benefits of immunosuppressive therapy in lymphocytic myocarditis ( ). In the MTT trial, which included patients with myocarditis diagnosed by the Dallas criteria and ventricular dysfunction, 6 months of immunosuppression showed no superiority over conventional treatment in improving ventricular function or survival, although no search for infectious agents was performed. The Italian double-blind, randomized, placebo-controlled TIMIC trial demonstrated improvement in ventricular function with immunosuppression in patients with myocarditis on biopsy (more than seven lymphocytes per field), more than 6 months of HF, and absence of viral genome on EMB. Thus, although it addressed a later stage of disease than the acute phase of myocarditis, this trial demonstrated the benefit of immunosuppression in the absence of viral genome in the myocardium.
However, failure to identify specific viruses establishes only that the viruses tested for are absent; it does not exclude the possibility that other microorganisms may be present. Moreover, the qualitative finding of microorganisms in EMB does not establish an unequivocal causal relationship with the development of myocarditis/cardiomyopathy, since viral genome can be found in cardiomyopathies of other specific etiologies and even in normal hearts. Taking parvovirus B19 as an example, whose presence in myocardial tissue on qualitative PCR is common, other techniques documenting a low copy number or the absence of RNA transcription could suggest no correlation with the development of myocarditis/cardiomyopathy, allowing immunosuppression to be considered even with the genome of this virus present. In the context of myocarditis due to autoimmune diseases, the use of immunosuppression is well established, and different strategies can be considered for each entity, most involving corticosteroids, usually combined with other immunosuppressive drugs ( ). Because of the severity of the clinical picture, despite its low incidence, the diagnosis of giant cell myocarditis must not be delayed, and its treatment involves intensive combined immunosuppression. The classic study by Cooper et al. demonstrated an increase in survival from 3 to 12 months with combined immunosuppression (corticosteroid and/or azathioprine and/or cyclosporine and/or antilymphocyte antibody) compared with no immunosuppression or corticosteroid alone. A more recent case series demonstrated 58% survival at 5 years with the combined use of corticosteroid, cyclosporine, and azathioprine. In refractory cases, the use of antilymphocyte antibody, mycophenolate, and sirolimus has been described. Eosinophilic myocarditis may be secondary to drug hypersensitivity reactions, autoimmune diseases (eosinophilic granulomatosis with polyangiitis, or Churg-Strauss syndrome), hypereosinophilic syndrome, infections, and cancer, or it may be idiopathic; immunosuppression is also considered in this context, usually with corticosteroids. A recent review of published cases found peripheral eosinophilia in 75% of cases, use of immunosuppression in 80%, and combination therapy in 20% (especially in Churg-Strauss and hypereosinophilic cases), with high 30-day mortality (13% hypereosinophilic, 17% idiopathic, 23% Churg-Strauss, and 40% hypersensitivity). The immunosuppressive regimen most commonly used in patients diagnosed with myocarditis involves corticosteroids alone or combined with azathioprine ( ), with EMB demonstration of inflammation in the absence of viral infection as the determinant for immunosuppression ( ). Patients undergoing immunosuppressive therapy should be continuously monitored for the development of adverse effects, as these can significantly increase both morbidity and mortality.

5.3. Antivirals: Indications and Types

The prognosis of inflammatory cardiomyopathy/myocarditis is negatively affected by viral persistence. For certain viruses, the course of viral cardiomyopathy is closely associated with the spontaneous course of the viral infection: spontaneous viral clearance is accompanied by clinical improvement, whereas this does not apply to patients who develop viral persistence.
Patients with enteroviral or adenoviral genomes in EMB should be treated with interferon beta (IFN-ß) (4 million units subcutaneously every 48 hours in the first week, then 8 million units subcutaneously every 48 hours from the second week onward, for 6 months). A non-randomized study showed that IFN-ß administration in EV- and ADV-positive patients induced viral elimination, reduced myocardial injury, and significantly improved long-term survival. In a subsequent phase 2 study – Betaferon in Chronic Viral Cardiomyopathy (BICC) – 143 patients with HF symptoms and biopsy-confirmed EV, ADV, and/or B19V genomes were randomly assigned to double-blind treatment with placebo or IFN-ß for 24 weeks in addition to standard HF treatment. Compared with placebo, viral elimination and/or reduction of viral load was greater in the IFN-ß groups. IFN-ß treatment was associated with favorable effects on NYHA functional class, quality of life, and patient global assessment. Retrospective analyses showed that IFN-ß treatment was significantly less effective in eliminating B19V infection. Human herpesvirus 6 has been detected with high prevalence in the myocardial tissue of patients presenting with HF symptoms in a clinically suspected setting of myocarditis. Interestingly, HHV-6 can integrate its genome into the telomeres of human chromosomes, which allows germline transmission of HHV-6. Chromosomally integrated HHV-6 (ciHHV-6) appears to be associated with an increased risk of myocarditis and may lead to worsening HF. HHV-6 is likewise not eliminated by IFN-ß, but HHV-6 reactivation and heart failure symptoms decrease after a 6-month course of ganciclovir followed by valganciclovir (ganciclovir 1,000 mg/24 h intravenously for 5 days, then valganciclovir 900 mg/24 h or 1,800 mg/24 h for 6 months) in symptomatic patients with reactivated ciHHV-6 (positive messenger RNA). B19V infection of the heart muscle is still a matter of debate. Early data provided evidence that antiviral reverse transcriptase inhibitors and nucleoside analogues such as telbivudine may improve the clinical outcome of patients positive for B19V DNA with replicative intermediates. However, a large randomized, placebo-controlled clinical trial is now needed to evaluate these results.
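As a concrete reading of the stepped IFN-ß regimen described at the beginning of this section, the sketch below maps a treatment week to its dose; the 26-week horizon assumed for the 6-month course and all names are illustrative assumptions, not part of the guideline.

```python
# Illustrative encoding of the stepped IFN-beta regimen described above:
# 4 million units SC every 48 h in week 1, then 8 million units SC every
# 48 h from week 2 onward, for 6 months (assumed here to be ~26 weeks).

def ifn_beta_dose_million_units(treatment_week: int) -> int:
    """Dose in million units for a 1-based treatment week."""
    if not 1 <= treatment_week <= 26:
        raise ValueError("week outside the assumed 6-month course")
    return 4 if treatment_week == 1 else 8

print([ifn_beta_dose_million_units(w) for w in (1, 2, 13, 26)])  # [4, 8, 8, 8]
```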
5.4. Immunomodulation (Immunoglobulin – Immunoadsorption): Indications and Types of Immunoglobulins

The rationale for using intravenous immunoglobulins (IVIG) in the treatment of myocarditis lies in their broad capacity to interact with the immune system. They can stimulate the complement system and immune cells to release anti-inflammatory cytokines and can inhibit the release of pro-inflammatory cytokines. Immunoglobulins have been studied in different settings, such as chronic HF, dilated cardiomyopathy, peripartum cardiomyopathy, acute myocarditis, fulminant myocarditis, and viral myocarditis. Although some of these studies point to a potential benefit of immunoglobulin, a randomized controlled trial in adult patients with recent-onset (<6 months) dilated cardiomyopathy or myocarditis showed no benefit of immunoglobulin on ventricular function compared with the control group. Ventricular function improved, and even normalized in 36% of cases, over follow-up regardless of treatment group. Of note, no viral testing was performed on the patients' biopsies, and only 16% had myocarditis proven by the presence of inflammation on biopsy. In patients with acute myocarditis, early studies pointed to improved ventricular function and a trend toward better 1-year prognosis with high-dose IVIG. However, a systematic review conducted in 2005, covering 62 studies, found only one randomized controlled trial on the subject and demonstrated no benefit of this therapy in patients with acute myocarditis, concluding that the evidence was insufficient to routinely recommend IVIG in this setting. More recently, a small multicenter randomized trial (41 patients) evaluated the short-term prognosis of patients with acute myocarditis or recent-onset cardiomyopathy treated with IVIG compared with patients who did not receive IVIG, and it found better short-term survival among patients who received IVIG, with no difference in the improvement in ventricular function, which occurred in both groups. There was, however, a significant reduction in inflammatory cytokines in the treated group. This study raises the hypothesis of a potential benefit of immunoglobulins and suggests a mechanism by which such benefit might occur; however, because of the small number of patients, it is not sufficient to support an unrestricted recommendation of this therapy for patients with acute myocarditis. In viral myocarditis, on the other hand, there are literature data demonstrating benefit from immunoglobulin. In a pilot study of patients with parvovirus B19 myocarditis, IVIG significantly reduced viral load and improved cardiac function. In another analysis including 152 patients with adenovirus or parvovirus B19 myocarditis, immunoglobulin also improved exercise capacity, left ventricular ejection fraction, and functional class. Inflammation was significantly reduced in both groups of patients, whereas a marked reduction in viral load occurred only among patients with adenovirus myocarditis; patients with parvovirus infection had viral persistence of around 40%. These data suggest a potential benefit of immunoglobulin in patients with EMB-proven viral myocarditis. Current data, although insufficient for a routine recommendation, point to a potential benefit of immunoglobulin in patients with biopsy-proven myocardial inflammation, especially in viral myocarditis due to adenovirus and parvovirus B19.

5.4.1. Immunoadsorption

The pathogenesis of progression to ventricular dysfunction in dilated cardiomyopathy involves inflammatory processes that can be identified and quantified by immunohistochemical methods, suggesting a causal relationship between myocarditis and cardiomyopathy. Lymphocytes, mononuclear cells, and increased gene expression of HLA antigens are frequent findings, as are antibodies against mitochondrial and contractile proteins; antibodies against beta-1 receptors and muscarinic receptors have also been described in dilated cardiomyopathy. Extraction of these cardiac antibodies is possible by immunoadsorption, and some studies have tested the efficacy of this approach in the treatment of patients with dilated cardiomyopathy/myocarditis.
In a small controlled study, 25 patients were randomized to immunoadsorption followed by IgG substitution or to standard treatment, with a significant reduction in myocardial inflammation (CD3 cells and CD4 and CD8 lymphocytes, as well as reduced expression of HLA class II antigens) observed in the treated group. Other small randomized studies have shown improvement in hemodynamics and ventricular function. Current data suggest that immunoadsorption may be a promising new therapeutic approach for patients with dilated cardiomyopathy and cardiac antibodies. To date, however, the evidence is based on small uncontrolled studies or open controlled studies compared with conventional therapy, whose results need to be confirmed by large prospective randomized multicenter trials. A multicenter, double-blind, placebo-controlled trial is currently under way to evaluate the effects of immunoadsorption followed by IgG substitution in patients with dilated cardiomyopathy. Only after the results of this large trial will it be possible to establish a grade of recommendation for this therapy in the context of dilated cardiomyopathy/myocarditis.
5.5.1. Without Ventricular Dysfunction

The therapeutic approach to patients with myocarditis and preserved ventricular function aims to prevent the development of ventricular dysfunction or malignant arrhythmias. In patients with diagnostic suspicion and intermediate risk, beta-blockers and angiotensin-converting enzyme inhibitors (ACEIs) or angiotensin receptor blockers (ARBs) can be used for a minimum of 12 months, with the goal of reducing mortality and morbidity. The decision to maintain therapy beyond this period should be based on assessment of ventricular function and arrhythmogenic potential. Since no clinical trials have been conducted in patients with this profile of myocarditis, treatment should follow the SBC guidelines for chronic and acute heart failure.

5.5.2. With Ventricular Dysfunction and Stable Hemodynamics

The therapeutic management of ventricular dysfunction in myocarditis should be aligned with current HF guidelines. The medications recommended as cardioprotective therapy for all patients with symptomatic ventricular dysfunction and stable hemodynamics, barring contraindications, are known as triple therapy: an ACEI or ARB, a beta-blocker, and a mineralocorticoid receptor antagonist. ACEIs/ARBs and beta-blockers can be started in all individuals with HFrEF, even if asymptomatic, barring contraindications, and should be maintained when ventricular function normalizes. Spironolactone, the mineralocorticoid receptor antagonist available in Brazil, should be started when the patient is already on the other medications and remains symptomatic (NYHA FC II-IV); it should be avoided in patients with creatinine >2.5 mg/dL or persistent hyperkalemia ( ).

5.5.3. Patient with Ventricular Dysfunction and Unstable Hemodynamics: Therapeutic Approach

Patients with acute myocarditis and systolic ventricular dysfunction can present in distinct clinical patterns, and the clinical response to therapy is quite variable; there may or may not be a clear manifestation of clinical low output or evidence of systemic hypervolemia. The use of inotropes is justified in at least three situations: in a clear context of low output, in cardiorenal syndrome refractory to diuretic optimization, and in the presence of SvO2 below 60% with invasive hemodynamic criteria of low output (see the sketch below). Depending on the dynamics of care, invasive monitoring should be discussed for patients without a clear response to this therapy ( ).
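The three inotrope indications in section 5.5.3 reduce to a simple checklist; the sketch below is illustrative only, with assumed input names rather than guideline terminology.

```python
# Illustrative checklist of the three situations in which inotropes are
# justified, per section 5.5.3. Input names are assumptions.

def inotropes_justified(clear_low_output: bool,
                        cardiorenal_refractory_to_diuretics: bool,
                        svo2_percent: float,
                        invasive_low_output_criteria: bool) -> bool:
    return (clear_low_output
            or cardiorenal_refractory_to_diuretics
            or (svo2_percent < 60 and invasive_low_output_criteria))

# Example: SvO2 of 55% with invasive low-output criteria -> justified
print(inotropes_justified(False, False, 55.0, True))  # True
```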
Myocarditis is an important cause of sudden death in athletes and can occur both in its acute and in its chronic phase. It is related not only to the degree of myocardial inflammation but also to the triggering of complex arrhythmias and the development of left ventricular dysfunction. Competitive or recreational athletes with active myocarditis should not engage in competitive sports or high-intensity physical exercise until the end of the convalescence period. There is no consensus on the length of this period. Until recently, a minimum period of 6 months after the onset of clinical manifestations was established. Currently, some experts already recommend shorter periods, such as 3 months, for clearance to train and compete, depending on the presence of symptoms, arrhythmias, ventricular dysfunction, inflammatory markers, and ECG changes ( ). The European Consensus on Cardiac Rehabilitation and Prevention recommends that, in patients with HF, including individuals with myocarditis, physical exercise should be of moderate intensity (up to 50% of peak VO2 or 60% of maximum predicted heart rate), provided there is no laboratory evidence of inflammation or arrhythmias. Because of the Covid-19 pandemic, professional athletes had to interrupt or postpone their professional activities owing to the risk of contamination. With the easing of distancing measures, the question arises of how athletes can return to their activities safely. Athletes who have had Covid-19 may present respiratory symptoms, muscle fatigue, and a risk of thrombotic events. Because of these risks, a flowchart with recommendations for clinical evaluation and clearance is intended to provide a guide for resuming physical activities ( ). Vaccination follows the same recommendations as the annual immunization against influenza and pneumococcus in patients with HF, along with the other available vaccines (mumps, measles, rubella, poliomyelitis). There is no robust evidence that these vaccines predispose to exacerbation or to the development of acute myocarditis that would outweigh the benefits of immunization. The same rationale applies to Covid-19 vaccination. To be vaccinated, patients must not be in the acute phase of myocarditis; vaccination is most advisable about 3 months after the diagnosis of myocarditis ( ).
6.1. Fulminant Myocarditis

Fulminant myocarditis can be defined contemporaneously in a pragmatic way, from a predominantly clinical viewpoint and independently of histological findings, as the presence of: 1) clinical presentation with severe HF symptoms lasting less than 30 days; 2) hemodynamic instability with cardiogenic shock and life-threatening arrhythmias (including resuscitated or aborted cardiac arrest); and 3) need for hemodynamic support (inotropes or mechanical circulatory support). In addition to the tests already recommended in cases of myocarditis, EMB is recommended in fulminant myocarditis and is usually positive, demonstrating multiple inflammatory foci and enabling histological characterization of the type of myocarditis in progress. The clinical course of fulminant myocarditis is usually grimmer than that of non-fulminant forms, with a lower chance of recovery of ventricular function, higher mortality, and a greater likelihood of heart transplantation.
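The three pragmatic criteria above can be read as a single combined check; the sketch below is illustrative, with assumed input names that are not guideline terminology.

```python
# Illustrative combined check of the three pragmatic clinical criteria
# for fulminant myocarditis stated above. Input names are assumptions.

def fulminant_myocarditis_criteria(days_of_severe_hf_symptoms: int,
                                   hemodynamic_instability: bool,
                                   needs_hemodynamic_support: bool) -> bool:
    """True when all three pragmatic clinical criteria are met."""
    return (days_of_severe_hf_symptoms < 30
            and hemodynamic_instability
            and needs_hemodynamic_support)

print(fulminant_myocarditis_criteria(10, True, True))   # True
print(fulminant_myocarditis_criteria(45, True, True))   # False
```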
6.1.1. Diagnostic Evaluation

The diagnosis of fulminant myocarditis involves the diagnostic criteria of myocarditis per se: a clinical picture of acute HF, elevated troponins and inflammatory markers, nonspecific ECG changes such as T-wave inversions and/or ST-segment changes, and acute alteration of ventricular function. In the setting of cardiogenic shock, right heart catheterization and coronary angiography are essential to guide management. Echocardiography is a central diagnostic tool, since patients with fulminant myocarditis are often not in a condition to undergo MRI. Echocardiographic findings are highly dependent on the form and timing of the patient's presentation. Patients with fulminant myocarditis generally have normal diastolic dimensions but increased septal thickness at presentation, whereas patients with acute (non-fulminant) viral myocarditis may present with either normal or increased diastolic dimensions but normal septal thickness, consistent with other forms of dilated cardiomyopathy. The decision to perform an EMB at the time of cardiac catheterization is in line with the recommendations of the 2013 ESC task force: EMB may be considered the initial diagnostic procedure when MRI is not possible (e.g., shock, presence of metal devices), provided experienced operators and cardiac pathologists are available. According to the guidelines, therefore, indications for EMB would be present for most patients with fulminant myocarditis ( ). Greater precision can be achieved by adding viral genome analysis, immunohistology, or transcriptomic biomarkers if diagnostic uncertainty persists despite histology. Beyond diagnostic confirmation, performing EMB in fulminant myocarditis can be decisive for defining therapy. Immunohistochemical assessment has been considered mandatory because of the known diagnostic limitations of the Dallas criteria, chiefly interobserver variability, which is estimated to yield diagnostic confirmation in at most 20% of cases. According to the WHO definition, the diagnosis of active myocarditis requires immunohistochemical detection of mononuclear infiltrates (T lymphocytes or macrophages) using a cutoff of more than 14 cells/mm², in addition to increased expression of HLA class II molecules. Detection of viral genome in biopsy specimens is feasible (though still not widely available in Brazil) and, when coupled with immunohistochemical analysis, increases diagnostic accuracy, as well as providing etiology and prognostic information. For fulminant myocarditis, a class I indication, level of evidence C, was already considered even when only histological analysis (Dallas criteria) was taken into account. Conventional histological analysis, which is widely available, allows etiological diagnoses that may change therapeutic management and lead to specific treatments, as in necrotizing eosinophilic myocarditis, giant cell myocarditis, sarcoidosis, amyloidosis, and myocarditis associated with known autoimmune diseases.

6.1.2. Therapeutic Approach

From the standpoint of specific treatment of myocarditis, recognition of the causal factor through histological investigation by EMB allows specific therapeutic strategies to be established, such as immunoglobulin in viral myocarditis, immunosuppression in autoimmune forms without viral presence, or corticosteroids in patients with sarcoidosis, necrotizing eosinophilic myocarditis, or giant cell myocarditis. A randomized clinical trial of immunosuppression in 85 patients with myocarditis and proven absence of viral persistence (TIMIC study) demonstrated a clear benefit on ejection fraction in these patients. However, these were patients with more than 6 months since diagnosis and proven absence of virus. There are no clinical trials of immunosuppression in patients with fulminant myocarditis. One option that has been tested is high-dose immunoglobulin, which has shown benefit on ventricular function and functional class and a survival benefit, although one clinical trial of 62 patients, in which only 16% had biopsy-proven myocarditis, demonstrated no benefit. Supportive treatment should be carried out with vasoactive drugs, vasopressors when needed, and, in situations where it is possible, the introduction of vasodilators. Immediate failure of drug treatment and volume adjustment should open the prospect of indicating hemodynamic support with circulatory assistance. The most commonly used devices are the intra-aortic balloon pump, percutaneous devices such as TandemHeart and Impella, extracorporeal membrane oxygenation (ECMO), and paracorporeal artificial ventricles, as a bridge to recovery or a bridge to heart transplantation ( ). Short-term devices are indicated for 7 to 10 days of support. After this period, when the patient cannot be stabilized, ECMO or artificial ventricles can provide longer support, increasing the chance of recovery from ventricular dysfunction (see the Cardiogenic Shock section).
Miocardite fulminante pode ser definida contempora- neamente de forma pragmática, contemplando uma visão predominantemente clínica, independentemente de achados histológicos, em que existe: 1) apresentação clínica de sintomas graves de IC inferior a 30 dias; 2) instabilidade hemodinâmica com choque cardiogênico e arritmias com risco de vida (incluindo parada cardiorrespiratória recuperada ou abortada); e 3) necessidade de suporte hemodinâmico (inotrópicos ou assistência circulatória mecânica). Além dos exames já citados recomendados em casos de miocardite, o uso da BEM na miocardite fulminante é recomendado, sendo usualmente positivo, demonstrando múltiplos focos inflamatórios, possibilitando caracterização histológica do tipo de miocardite em curso. O curso clínico da miocardite fulminante é usualmente mais sombrio que outros tipos de miocardite não fulminantes, com menor chance de recuperação da função ventricular, maior mortalidade e maior chance de transplante cardíaco. , 6.1.1. Avaliação Diagnóstica O diagnóstico de miocardite fulminante envolve os critérios diagnósticos de miocardite per se envolvendo quadro clínico de IC aguda, elevação de troponinas e de marcadores inflamatórios, alterações inespecíficas no ECG, como inversões de onda T e/ou alterações de segmento ST, e alteração aguda da função ventricular. No cenário de choque cardiogênico, cateterismo cardíaco direito e angiografia coronária são essenciais para orientar o manejo. A ecocardiografia é ferramenta central no diagnóstico, uma vez que os pacientes com miocardite fulminante frequentemente não apresentam condições para submeterem-se à RM. Os achados ecocardiográficos são altamente dependentes da forma e do tempo de apresentação do paciente. Os pacientes com miocardite fulminante, em geral, apresentam dimensões diastólicas normais, mas aumento na espessura septal na apresentação, enquanto pacientes com miocardite viral aguda (não fulminante) podem apresentar-se com dimensões diastólicas tanto normal quanto aumentadas, mas espessura septal normal, consistente com outras formas de miocardiopatia dilatada. , , , , , A decisão de realizar uma BEM no momento do cateterismo cardíaco está conforme as da força-tarefa de 2013 da ESC 15 A BEM pode ser considerada o procedimento diagnóstico inicial quando a RM não é possível (p. ex., choque, presença de dispositivos de metal), se operadores experientes e patologistas cardíacos estão disponíveis. De acordo com as diretrizes, portanto, as indicações para BEM estariam presentes para a maioria dos pacientes com miocardite fulminante ( ). Mais precisão pode ser alcançada quando adicionados análise do genoma viral, imuno-histologia ou biomarcadores transcriptômicos se houver incerteza diagnóstica apesar da histologia. Além da confirmação diagnóstica, a realização de BEM na miocardite fulminante pode ser decisiva para definição terapêutica. A avaliação imuno-histoquímica tem sido considerada obrigatória em função das conhecidas limitações diagnósticas dos critérios de Dallas, principalmente variabilidade interobservador, que, estima-se, vem trazer confirmação diagnóstica em, no máximo, 20% dos casos. , , , , De acordo com definição da OMS, para diagnóstico de miocardite ativa, é necessária a detecção imuno-histoquímica de infiltrados mononucleares (linfócitos T ou macrófagos) usando um ponto de corte de mais de 14 células/mm 2 , em adição à expressão aumentada de moléculas HLA classe II. 
A detecção de genoma viral nos espécimes da biópsia é factível (ainda pouco disponível no Brasil) e, quando acoplada à análise imuno-histoquímica, aumenta a acurácia diagnóstica, além de prover a etiologia e informação prognóstica. Para miocardites fulminantes, a indicação classe I, nível de evidência C, já era considerada mesmo quando levava-se em conta apenas a análise histológica (critérios de Dallas). A análise histológica convencional, amplamente disponível, permite diagnósticos etiológicos que podem levar a mudanças de condutas terapêutica e a tratamentos específicos, como em miocardites eosinofílicas necrotizantes, miocardites de células gigantes, sarcoidose, amiloidose e miocardites associadas a doenças autoimunes conhecidas. 6.1.2. Abordagem Terapêutica Do ponto de vista do tratamento específico da miocardite, o reconhecimento do fator causal por meio da investigação histológica por BEM permite o estabelecimento de estratégias terapêuticas específicas, como a utilização de imunoglobulina nas miocardites virais e imunossupressão nas autoimunes sem presença viral, ou o uso de corticosteroide em pacientes com sarcoidose, miocardite eosinofílica necrotizante ou miocardite por células gigantes. Um ensaio clínico randomizado de imunossupressão em 85 pacientes com miocardite com comprovada ausência de persistência viral (TIMIC Study) demonstrou claro benefício sobre a fração de ejeção desses pacientes. No entanto, tratavam-se de pacientes com mais de 6 meses de diagnóstico e comprovada ausência de vírus. Ensaios clínicos de imunossupressão em pacientes com miocardite fulminante não existem. Uma opção que tem sido testada é a utilização de altas doses de imunoglobulina, a qual se mostrou benéfica sobre a função ventricular e classe funcional e demonstrou benefício em sobrevida; , , embora tenha sido demonstrado em um ensaio clínico com 62 pacientes, em que apenas 16% tinham miocardite comprovada por biópsia a ausencia de benefício. O tratamento de suporte deve ser realizado com fármacos vasoativos e eventualmente vasopressores e em situações nas quais seja possível a introdução de vasodilatadores. O insucesso imediato no tratamento medicamentoso e acerto volêmico deve abrir perspectiva para indicação de suporte hemodinâmico com assistência circulatória. Os dispositivos mais utilizados são balão intra-aórtico, dispositivos percutâneos como tandem-heart e impella , circulação extracorpórea (ECMO) e ventrículos artificiais paracorpóreos, como ponte para recuperação ou ponte para transplante cardíaco ( ). Os dispositivos de curta duração têm sua indicação para suporte de 7 a 10 dias. Após esse período e quando não se consegue a estabilização do paciente, a indicação de ECMO ou ventrículos artificiais pode dar suporte por período maior, possibilitando mais chance de recuperação da disfunção ventricular (ver seção Choque cardiogênico ).
6.2.1. Diagnosis

Sarcoidosis is a granulomatous inflammatory disease of unknown etiology, characterized by non-caseating granulomas, which can affect several organs, especially the lungs (90%), skin, lymph nodes, central nervous system, eyes, liver, and heart, among others. Although clinically manifest cardiac sarcoidosis occurs in only 5% to 10% of patients with sarcoidosis, autopsy studies have revealed cardiac involvement in 20% to 30%, and studies with advanced cardiac imaging (CMR or PET) have demonstrated cardiac involvement in around 40%. Besides differences in definitions, another factor that appears to drive the rising prevalence of the disease is the refinement of imaging methods. Current practice follows the Japanese Circulation Society (JCS) guidelines released in 2019 ( , and ). Among the changes introduced in that document, abnormally high tracer accumulation in the heart on 18F-fluorodeoxyglucose positron emission tomography/computed tomography (FDG-PET/CT), which had been categorized in the 2006 "Guidelines for the diagnosis of cardiac involvement in patients with sarcoidosis", was promoted to a major criterion, as was late gadolinium enhancement of the myocardium on CMR. Under the current JCS guidelines, the patient is also clinically diagnosed with cardiac sarcoidosis when there are clinical findings strongly suggestive of cardiac involvement plus pulmonary or ophthalmologic sarcoidosis, together with at least two of the five laboratory findings characteristic of sarcoidosis. Finally, a definition of isolated cardiac sarcoidosis was formulated for the first time.

6.2.2. Treatment and Prognosis

Immunosuppressive treatment of cardiac sarcoidosis is based on clinical experience and expert opinion, as randomized studies are lacking. The goal of treatment is to reduce inflammatory activity and prevent fibrosis, and it should be guided by the magnitude of the inflammatory process and the degree of myocardial involvement. Immunosuppressive treatment is recommended in the following situations: left ventricular dysfunction, ventricular arrhythmias, hypermetabolic activity on FDG-PET, conduction disturbances, late gadolinium enhancement on CMR, or right ventricular dysfunction in the absence of pulmonary hypertension. There are three lines of treatment in sarcoidosis: first line, corticosteroids; second line, methotrexate and azathioprine for patients intolerant of or requiring chronic corticosteroids; and third line, anti-TNF antibodies (infliximab and adalimumab) after failure of previous treatments. The drug of choice is the corticosteroid. In a systematic review of corticosteroid use in patients with ventricular conduction disturbances, 27 of 57 patients (47.4%) improved after treatment. Nevertheless, given the unpredictability of the response, patients with conduction disturbances and cardiac sarcoidosis should receive a pacemaker or an implantable cardioverter-defibrillator. Older studies evaluating the effect of corticosteroids on ventricular function suggest preservation of ventricular function when it is normal at diagnosis, improvement of ejection fraction in patients with mild to moderate dysfunction, and no improvement in cases of severe ventricular dysfunction.
By contrast, a Finnish study suggests improvement in left ventricular function with immunosuppressive treatment in cases of severely compromised ventricular function (LVEF < 35%), but no change when function is normal or moderately reduced at the start of treatment; these differences may reflect earlier diagnosis and treatment. For ventricular arrhythmias the evidence is more limited; the arrhythmia appears to be secondary to scar, and the benefit of corticosteroids in these patients is probably small. Catheter ablation for ventricular tachycardia may be considered after implantation of a cardioverter-defibrillator or after failure of antiarrhythmic drugs. The suggested treatment algorithm ( ) starts with prednisone at 30 mg/day to 40 mg/day, followed by repeat PET at 4 to 6 months to assess disease activity and guide subsequent pharmacological treatment. Yokoyama et al. compared 18F-FDG PET/CT before and after corticosteroid therapy in 18 patients with cardiac sarcoidosis and observed that SUVmax decreased significantly from baseline. A recent study used 18F-FDG PET/CT for the diagnosis and treatment of cardiac sarcoidosis with low-dose corticosteroids, achieving disease control at 1 year from diagnosis. Immunosuppressive drugs other than corticosteroids become necessary because of the long duration of treatment and are indicated in patients who require a maintenance prednisone dose >10 mg/day or who do not tolerate corticosteroid side effects. Suggested agents are methotrexate, azathioprine, cyclophosphamide, and tumor necrosis factor inhibitors. The choice of drug may be determined by the type of extracardiac involvement, such as avoiding methotrexate in hepatic involvement; studies in patients with pulmonary, cutaneous, ocular, neurological, and multisystem sarcoidosis suggest good efficacy of infliximab ( ).

6.2.3. Prognosis

Cardiac sarcoidosis carries a worse prognosis than dilated cardiomyopathy; once the heart is involved, the prognosis becomes unfavorable, and cardiac involvement accounts for 85% of deaths from the disease. Kandolin et al. reported the long-term effect of immunosuppressive treatment in the Finnish cohort: transplant-free survival at 1, 5, and 10 years was 97%, 90%, and 83%, respectively, over a follow-up of 6.6 years. In that study, the presence of HF and cardiac function before corticosteroid treatment were the most important prognostic factors, underscoring the importance of early treatment. The presence of late gadolinium enhancement on CMR increased the risk of death, aborted sudden death, or defibrillator implantation 30-fold over a follow-up of 2.6 years, findings later confirmed in meta-analyses. A threshold of 20% fibrotic mass has been suggested to be associated with the risk of events. In a PET study, 26% of the reported adverse events, such as ventricular tachycardia and death, occurred in patients with cardiac uptake on PET over a follow-up of 1.5 years, whereas extracardiac uptake was not associated with adverse events during follow-up. Another interesting finding is that patients with isolated cardiac sarcoidosis have a worse prognosis than patients with systemic sarcoidosis and cardiac involvement.
Another Finnish study observed a high frequency of ventricular dysfunction and septal abnormalities on echocardiography, a high prevalence of late gadolinium enhancement on CMR, and a stronger association with female sex and greater left ventricular dysfunction. In that study, HF at presentation, severe left ventricular dysfunction (LVEF < 35%), and isolated cardiac sarcoidosis were also related to prognosis. Echocardiographic strain (GLS < 17.3) was an independent predictor of mortality, HF, hospitalization, new arrhythmias, and development of cardiac sarcoidosis. Serum biomarkers such as BNP were related to the development of HF, while troponin was related to fatal arrhythmias, lower ejection fraction, and worse prognosis.
6.3.1. Treatment

According to an international registry, GCM is the etiology of 12% of fulminant and 3.6% of non-fulminant myocarditis cases. Treatment targets are limited because the mechanisms of GCM are not adequately understood, although an autoimmune mechanism involving T-lymphocyte-mediated myocardial inflammation has been proposed. GCM carries a worse prognosis than eosinophilic and lymphocytic myocarditis and is more frequently associated with HF, cardiac arrest, ventricular fibrillation and tachycardia, conduction blocks, or presentations mimicking AMI. Without treatment, the course is usually fatal, with death within 5.5 months. Even with treatment, GCM has high mortality or an early need for mechanical circulatory support and/or heart transplantation; a 5-year transplant-free survival of 42% was recently reported. Troponin levels and moderate/severe necrosis or fibrosis on EMB have been described as important markers of early death or of the need for mechanical support or heart transplantation; elevated BNP/NT-proBNP levels and markedly reduced LVEF are also prognostic markers. The guarded prognosis may be due to myocardial injury or to recurrence of GCM, and recurrence has also been described after heart transplantation. Early diagnosis is critical and is based on EMB, on histological analysis of the heart explanted at transplantation, or on a myocardial fragment obtained during implantation of a ventricular assist device. The sensitivity of biopsy may be limited by sampling error. Fragments are preferentially obtained from the apical portion of the right ventricular septum, because this reduces the risk of complications. A negative biopsy does not necessarily exclude the diagnosis of GCM; the sensitivity of EMB increased from 68% to 93% after repeating the procedure ( ). Treatment of GCM can be divided into treatment of HF with reduced LVEF caused by myocardial injury or GCM recurrence, treatment of arrhythmias and conduction blocks, and treatment of the presumed mechanism with immunosuppressants. Treatment of HF, hemodynamic disturbances, blocks, and arrhythmias follows the SBC heart failure guidelines, whether with drugs and/or inotropes, pacemakers/defibrillators, and/or mechanical circulatory support and heart transplantation. Heart transplantation may be indicated earlier because of the guarded prognosis of GCM even under immunosuppression. Implantation of a cardioverter-defibrillator may be indicated for primary or secondary prevention of sudden death, given the high incidence of complex, severe arrhythmias: 59% of patients with GCM have been reported to present sustained ventricular tachycardia or shocks for complex ventricular arrhythmia despite being free of severe HF. The indication for immunosuppressants is based on case series or small randomized studies, which used drugs such as prednisone, cyclosporine, azathioprine, mycophenolate, everolimus, sirolimus, rabbit globulin, antithymocyte globulin, or muromonab-CD3 for T-lymphocyte cytolysis. After the initial diagnosis, high-dose corticosteroids and/or rabbit globulin, antithymocyte globulin, or muromonab-CD3 are generally used, and chronic immunosuppressive medication may be added at the same time. The use of hemoadsorption has also been reported ( ).
Maintenance immunosuppression is generally based on cyclosporine in a dual or triple regimen, although there are important limitations in assessing its real benefit. Combinations of prednisone, cyclosporine, azathioprine, and mycophenolate, alone or combined with RATG or muromonab-CD3, have been used. Triple immunosuppression has been reported to increase the chance of transplant-free survival to 58% at 5 years; immunosuppression must be maintained, however, because of the possibility of recurrence. Combined immunosuppression (prednisone, cyclosporine, and azathioprine) appears to be the most widely accepted, although other combinations have been used, such as cyclosporine with RATG, or RATG with high-dose corticosteroids. There are no comparative studies establishing the best immunosuppressive regimen. Cyclosporine combined with high-dose corticosteroids or muromonab-CD3 for 4 weeks reduces necrosis, cellular inflammation, and giant cells. Heart transplantation is indicated and improves medium-term survival, but recurrence occurs in 20% to 25% of cases; it is the treatment of choice despite a higher risk of rejection.

6.3.2. Clinical Manifestations and Diagnosis

Giant cell myocarditis is recognized as a rapidly progressive and usually fatal disease unless the patient undergoes heart transplantation. In a substantial proportion of cases it is associated with an autoimmune process. Data from the Giant Cell Myocarditis Study Group showed a predominant incidence in young white adults, without sex predominance, with acute HF as the main manifestation (75% of cases), although half of the patients developed complex ventricular arrhythmia during the course of the disease; median transplant-free survival was 5.5 months. A more recent registry of giant cell myocarditis also showed incidence in young adults, predominantly women, with acute HF, AV block, and ventricular arrhythmias as the main clinical manifestations. Imaging shows no findings specific to giant cell myocarditis. Diagnosis is based on the characteristic EMB findings of a diffuse, mixed inflammatory infiltrate composed mainly of macrophages, followed in number by lymphocytes and by typically scattered multinucleated giant cells derived from macrophages, with a smaller representation of eosinophils and plasma cells.
6.4.1. Clinical Manifestations, Routes of Infection, and Reactivation in Immunosuppressed Patients

In recent years, acute Chagas disease (ACD) has been increasing in Latin American countries, through oral and vectorial transmission as well as through reactivation of the disease. The main routes of infection in ACD today are oral transmission (68.4%), vectorial (5.9%), vertical (0.5%), transfusional (0.4%), accidental (0.1%), and unknown (24.7%), as described in a series of cases diagnosed in the Brazilian Amazon. Vectorial transmission occurs because triatomines defecate during or shortly after feeding on blood; the deposition of contaminated feces allows the infective forms of Trypanosoma cruzi to reach the skin, the mucous membranes, and then the bloodstream. The incubation period is 4 to 15 days. Oral transmission occurs through ingestion of food or drink contaminated with parasites; it is currently the most common cause of acute disease, producing outbreaks in endemic and non-endemic regions, with an incubation period of 3 to 22 days. ACD may present with nonspecific signs and symptoms of an infectious syndrome, such as fever, myalgia, facial edema, and arthralgia, together with signs related to the portal of entry, such as the inoculation chagoma and the Romaña sign in the vectorial form, and digestive manifestations, including gastrointestinal bleeding, in the oral form. Acute cases may or may not involve myocarditis and pericarditis. Necropsy reports show intense acute inflammation of the epicardium and myocardium, with intense, diffuse inflammatory activity, extensive dissociation of cardiac fibers, and visible amastigote forms of the parasite. Signs and symptoms compatible with HF have been reported in 26% to 58% of cases. Severe cases with cardiac tamponade and cardiogenic shock due to LV systolic dysfunction can occur. Case fatality in the oral transmission form ranged from 2% to 5% in the largest series. Cardiac abnormalities on complementary tests ranged from 33% to 70% for electrocardiographic changes (right bundle branch block, first-degree AV block, acute atrial fibrillation, left anterior fascicular block) and from 13% to 52% for echocardiographic changes, pericardial effusion being the most frequent finding (10% to 82%); segmental wall-motion abnormalities, common in the chronic phase, are rarely found in the acute phase. Despite the occurrence of severe cardiac involvement, most patients maintain preserved systolic function, with few cases of reduced ejection fraction, and most deaths result from large pericardial effusion and cardiac tamponade.

6.4.2. Diagnosis

Direct parasitological tests are the most appropriate for diagnosing acute myocarditis. Indirect methods such as blood culture and xenodiagnosis have low sensitivity and are not ideal in the acute phase. Serological tests are not the best diagnostic methods in the acute phase, but they can be performed when direct parasitological tests are persistently negative and clinical suspicion persists. Fresh examination for the parasite in circulating blood is quick and simple, as well as more sensitive than the stained smear; the ideal collection conditions are while the patient is still febrile and within 1 month of symptom onset.
Concentration methods (Strout, microhematocrit, buffy coat) are recommended when fresh examination is negative, as they are more sensitive; they are also employed when the acute clinical picture began more than 1 month earlier. Negative results on the first analysis should not be considered definitive, especially if symptoms persist, unless another etiology is confirmed. PCR, as a molecular diagnostic method, has become increasingly important for detecting recent infection, since it yields positive results days to weeks before circulating trypomastigotes can be detected. It can be performed on peripheral blood and on tissue obtained by EMB to detect early reactivation after heart transplantation, before the clinical picture or graft dysfunction appears. Reactivation of Chagas disease after heart transplantation may occur in 19.6% to 45% of cases. The clinical picture may be one of acute myocarditis, with varying degrees of HF, frequently accompanied by systemic manifestations. Erythema and subcutaneous nodules may appear on the skin and should be biopsied to search for nests of amastigotes. Monitoring should be routine, even without suspicion of reactivation; when there are no extracardiac clinical signs, biopsy should be performed.

6.4.3. Treatment

Trypanocidal treatment is indicated in patients with ACD, with or without manifestations of myocarditis, and in reactivation of chronic disease due to immunosuppression (transplant recipients) ( ). Benznidazole is the available drug recommended for the treatment of T. cruzi infection. Information on this topic, however, is scarce, based on non-randomized studies with insufficient numbers of patients and observation time. Although the criteria for cure of the disease remain controversial, there is current consensus that benznidazole treatment should be given in acute forms and that there is a probable long-term benefit. The dose of benznidazole in children is 5 to 10 mg/kg per day, divided into two doses, for 60 days; in adults, the dose is 5 mg/kg. Adverse reactions occur in approximately 30% of patients, the most frequent being allergic dermatitis (30%) and peripheral sensory neuropathy (10%).
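As a purely illustrative aid (not part of the guideline), the dosing rule above can be written out as a short calculation. The function name is an invention of this sketch; the pediatric range of 5-10 mg/kg/day split into two doses for 60 days comes directly from the text, while treating the adult 5 mg/kg figure as a daily dose is an assumption, since the text does not state the interval explicitly.

```python
def benznidazole_daily_dose_mg(weight_kg: float, is_child: bool,
                               mg_per_kg: float = 5.0) -> float:
    """Total daily benznidazole dose in mg.

    Children: 5-10 mg/kg/day (pass mg_per_kg in that range), split into
    two doses, for 60 days, per the text. Adults: 5 mg/kg; the text does
    not state the dosing interval, so interpreting it as a daily dose is
    an assumption of this sketch.
    """
    if is_child and not (5.0 <= mg_per_kg <= 10.0):
        raise ValueError("pediatric dose must be 5-10 mg/kg/day")
    return weight_kg * mg_per_kg

# Example: a 20 kg child at 7.5 mg/kg/day receives 150 mg/day,
# i.e., two 75 mg doses, for 60 days.
total_daily = benznidazole_daily_dose_mg(20, is_child=True, mg_per_kg=7.5)
per_dose = total_daily / 2
```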
Tropical diseases are infectious entities, generally vector-borne, that occur in tropical regions. They receive little attention from governments, and the resources allocated to their control are scarce; they affect vulnerable populations in areas with inadequate sanitation and deficient health systems. The Brazilian Amazon is endemic for these diseases, although other regions of the country are also affected. Many tropical diseases cause myocarditis and appear to contribute to the growing burden of heart disease in developing countries. The tropical diseases that cause myocarditis and are prevalent in Brazil are malaria, dengue, chikungunya, Zika, and yellow fever ( ); they should be considered in the investigation of myocarditis occurring in endemic areas. Malaria is caused by protozoa of the genus Plasmodium (in Brazil, the species P. vivax and P. falciparum), transmitted by the bite of the Anopheles mosquito. Malaria is endemic in the Amazon region, where more than 155,000 cases were diagnosed in 2019. P. falciparum is responsible for the most severe forms of the disease and has been most often associated with the development of myocarditis. Necropsy studies of severe malaria show large numbers of parasites in the myocardium and inflammation consistent with myocarditis. Most studies reporting malarial myocarditis are case series of hospitalized patients, with assessment of the ECG, markers of myocardial injury, and echocardiography; these series cover severe cases and show abnormal markers of cardiac injury in up to 59% and echocardiographic changes, such as reduced systolic function, in up to 19% of the patients evaluated. Many studies linking malaria to AMI have flaws in the definition of the outcome assessed, and these were probably cases of myocarditis described as infarctions. In acute malaria evolving to the severe form of the disease, myocardial dysfunction due to malarial myocarditis should be considered, and biomarkers of myocardial injury and ventricular function should be assessed to optimize cardiovascular management. The arboviral diseases are those caused by arboviruses, which include the dengue, Zika, chikungunya, and yellow fever viruses, transmitted by the bite of the Aedes aegypti mosquito. Cardiovascular involvement in arboviral diseases has been demonstrated especially in dengue, which is the most prevalent arboviral disease in Brazil and the one with the highest reported percentage of cardiovascular manifestations, with prospective studies reporting that 48% of patients with the severe form develop myocarditis. A necropsy study of four fatal dengue cases showed findings of myocarditis with edema, hemorrhage, mononuclear infiltrate, and the presence of viral antigen and replication. Chikungunya is, among the arboviral diseases mentioned here, the most symptomatic (80% of cases); however, it usually presents with mild symptoms mostly related to the osteoarticular system. Even so, the infection can present systemically and cause generalized damage or damage to specific organs, such as the heart. A case report of a patient with chikungunya who developed chest pain shows CMR findings typical of myocarditis, and several case series from epidemics have reported cardiovascular involvement in up to 37% of cases, generally consistent with myocarditis.
Of all the tropical infections discussed here, Zika was discovered most recently and has the highest percentage of asymptomatic cases; when clinically manifest, it occurs predominantly in congenital form with neurological involvement. Even so, a few longitudinal studies of non-neurological complications of this infection in adults report cardiovascular outcomes such as HF, arrhythmias, and AMI, as do case reports of Zika-associated myocarditis, and prospective studies of congenital Zika describe echocardiographic abnormalities suggestive of cardiovascular damage. This picture probably underestimates the real impact of the disease on the heart, since few longitudinal studies have evaluated it. Yellow fever is a neglected tropical arboviral disease that for a long time remained confined to the sylvatic cycle, with low (under-reported) incidence and little geographic spread, so that few studies and cases were adequately reported, especially regarding the cardiovascular system. Even so, with the increasing urbanization of the disease and better understanding of its pathophysiological mechanisms, its relationship with the heart has been demonstrated by several studies, among them the PROVAR+ study, which reported echocardiographic and electrocardiographic abnormalities in 48% and 52% of cases, respectively, in addition to post-mortem analyses that isolated the virus in cardiac tissue or demonstrated myocardial damage. Therefore, although the association between tropical diseases and myocarditis rests on case series and few studies with a well-defined diagnosis of myocarditis, diagnostic investigation of the diseases common to the region is justified in cases of myocarditis in endemic areas. This workup should include antigen testing or serology for arboviral diseases and a thick blood smear for malaria. When one of these diseases is diagnosed, an infectious disease specialist should be consulted to guide specific treatment of malaria or supportive care in arboviral disease. Another clinical situation involves patients with a diagnosis of arboviral disease or malaria who progress to the severe form, especially shock; in these cases, cardiac injury should be assessed with markers of myocardial necrosis, and myocardial function with echocardiography, to diagnose myocardial involvement (myocarditis), and management should include optimization of myocardial function.
Human coronaviruses have been associated with myocarditis. In humans, during the Toronto SARS outbreak, SARS-CoV viral RNA was detected in 35% of autopsied hearts, raising the possibility of direct cardiomyocyte damage by the virus ( ).

6.6.1. Possible Pathophysiology of SARS-CoV-2-Related Myocarditis

The mechanisms of myocardial injury are not well established but probably involve: myocardial injury secondary to an imbalance between oxygen supply and demand; microvascular injury; systemic inflammatory response; stress cardiomyopathy; non-obstructive acute coronary syndrome; and direct viral myocardial injury ( ).

6.6.2. Direct Viral Myocardial Injury

Case reports of myocarditis in Covid-19 provide evidence of cardiac inflammation but do not establish the mechanism. SARS-CoV-2 infection is initiated by binding of the viral surface spike protein to the human angiotensin-converting enzyme 2 (ACE2) receptor. The spike protein, however, must first be cleaved at the S1/S2 site and subsequently at the S2' site to allow binding to ACE2; cleavage at the S1/S2 site appears to be mediated by transmembrane serine protease 2 (TMPRSS2) ( ). To date there is only one report of biopsy-proven SARS-CoV-2 viral myocarditis, with viral inclusions or viral genome detected in myocardial tissue; even there, viral particles were found not within cardiomyocytes but only inside macrophages in the cardiac interstitium. Another hypothetical mechanism of direct viral myocardial injury is infection-mediated vasculitis. The ACE2 receptor is highly expressed in arterial and venous endothelium. Although ACE2 is only mildly expressed in cardiomyocytes, it is highly expressed in pericytes; Covid-19 may attack pericytes, which are essential for endothelial stability, causing endothelial dysfunction that leads to microcirculatory disturbances. This would explain why Covid-19 can cause cardiac injury even though ACE2 is only slightly expressed in cardiomyocytes. Autopsies show inflammatory infiltrates composed of macrophages and, to a lesser degree, T cells and CD4+ cells; these mononuclear infiltrates are associated with regions of cardiomyocyte necrosis, which, by the Dallas criteria, define myocarditis.

6.6.3. Diagnosis of Covid-19-Related Myocarditis

The clinical presentation of SARS-CoV-2 myocarditis ranges from mild symptoms, such as fatigue, dyspnea, and precordial pain, to cardiogenic shock in the most severe cases. Patients may show signs of right-sided HF, with elevated jugular venous pressure, peripheral edema, and right upper quadrant pain. The most emergent presentation is fulminant myocarditis, defined as ventricular dysfunction and HF within 2 to 3 weeks of viral infection; the early signs of fulminant myocarditis often resemble those of sepsis.

6.6.4. Laboratory Findings

Troponin and NT-proBNP elevations have been observed in cases of Covid-19 myocarditis. Abnormal troponin values are common in patients with Covid-19, especially when high-sensitivity cardiac troponin (hs-cTn) is used. Studies of the clinical course of Covid-19 found detectable hs-cTnI in most patients, and hs-cTnI was significantly elevated in more than half of those who died. Patients with Covid-19 often show significant elevation of BNP or NT-proBNP.
The significance of this finding is uncertain and should not necessarily trigger an evaluation or treatment for HF unless there is clear clinical evidence for the diagnosis. In patients with Covid-19, (NT-pro)BNP levels may also rise secondary to myocardial stress, as a possible effect of severe respiratory disease. Because abnormal troponin or natriuretic peptide results are frequent and nonspecific in patients with Covid-19, these assays should be performed only when the diagnosis of AMI or HF is being considered on clinical grounds; an abnormal troponin or natriuretic peptide result should not be taken as evidence of AMI or HF without corroborating evidence.

6.6.5. Electrocardiogram

ECG changes commonly associated with pericarditis, such as ST elevation and PR depression, may be seen in myocarditis; however, these findings are not sensitive for detection of the disease, and their absence does not exclude it. For example, one reported case of Covid-19-related myocarditis showed neither ST-segment elevation nor PR depression. Other ECG abnormalities, including new bundle branch block, QT-interval prolongation, pseudo-infarction pattern, ventricular premature beats, and bradyarrhythmia with advanced AV block, may be observed in myocarditis. A case series was recently published of patients diagnosed with Covid-19 who presented, at some point during the infection, with ST-segment elevation on the ECG.

6.6.6. Imaging

In a recent document, the European Society of Cardiology (ESC) sets out the conditions to be weighed whenever any cardiovascular imaging method is needed in patients with Covid-19: imaging should be used when it is likely to produce a substantial change in management or when a life-saving decision is at stake; the modality best able to answer the question should be chosen, always considering the safety of the medical team with respect to exposure; and non-urgent, elective, or routine examinations should be postponed or even cancelled. Accordingly, transthoracic echocardiography, despite its central role in the cardiovascular workup of these patients, should not be routinely indicated during the current Covid-19 pandemic, and should be used judiciously in specific cases. Recent recommendations of the Society of Cardiovascular Computed Tomography (SCCT) on the use of coronary CT angiography in the setting of Covid-19 include acute heart failure of unknown cause ( ). The ESC document suggests that positive troponins associated with myocardial dysfunction or severe arrhythmias not explained by other methods may be an indication for CMR, provided the diagnosis is crucial for treatment and the patient is stable enough to be transferred safely for the examination. In this context, current guidance from the Society for Cardiovascular Magnetic Resonance (SCMR) suggests that a CMR examination should be considered judiciously and on an individual basis when acute myocarditis with immediate management implications is suspected. If CMR is performed, the results should be interpreted according to the Lake Louise criteria: (1) edema; (2) irreversible cell injury; and (3) hyperemia or capillary leak ( ) (a schematic criteria check is sketched at the end of this section).

6.6.7. Endomyocardial Biopsy

Both the AHA and the ESC recommend EMB for the definitive diagnosis of myocarditis, but both societies acknowledge its limitations.
6.6.7. Endomyocardial Biopsy
Both the AHA and the ESC recommend EMB for the definitive diagnosis of myocarditis, but both societies acknowledge its limitations. In the SARS-CoV-2 era, the clinical utility and role of EMB, currently the gold standard for confirming the diagnosis of myocarditis, remain uncertain; in addition, noninvasive imaging such as echocardiography and CMR is difficult to perform with adequate precaution and isolation measures. Another point to consider is that, in some cases, SARS-CoV-2 infection may not initially present with clear signs and symptoms suggestive of interstitial pneumonia, but may instead present as myocarditis without respiratory symptoms, sometimes complicated by cardiogenic shock with a fulminant course. In addition, there is little evidence on the treatment of SARS-CoV-2-associated myocarditis. There is one case report in which early therapy with glucocorticoids and immunoglobulins was used, with benefit to the patient. Corticosteroids have been used in several viral respiratory infections (influenza, SARS-CoV, and MERS-CoV), showing limited benefit and, in some cases, delaying viral clearance and increasing mortality. The ESC Working Group on Myocardial and Pericardial Diseases, however, endorses steroid use in myocarditis due to proven autoimmune disease and in virus-negative myocarditis only after active infection has been excluded by EMB. Clearly, in real-world practice, EMB is not always available, and its role in SARS-CoV-2-related myocarditis is still unknown. Moreover, in the absence of multicenter randomized trials, routine use of immunoglobulin is also not recommended. In conclusion, we believe there are significant gaps in the evaluation of AMI in patients with SARS-CoV-2, which requires a complete diagnostic workup, prioritized treatments, and, if necessary, more aggressive strategies, especially in those who develop cardiogenic shock during fulminant myocarditis.
6.7.1. Antineoplastic Agents That Induce Acute Cardiotoxicity
Advances in cancer treatment over recent decades have improved patients' survival and quality of life. At the same time, with greater longevity, cardiovascular risk factors act for longer, compounded by the potential risk of injury to the cardiovascular system induced by chemotherapy, radiotherapy, and immunotherapy. Recent studies show two periods of higher cardiovascular disease occurrence in oncology patients: the first year after diagnosis, and the years after cure, when patients are termed survivors, a group with significantly increased cardiovascular mortality. Among the emerging toxicities, myocarditis stands out. More recently, cancer-treatment-related myocarditis has gained importance with the evolution of immunotherapy, specifically with immune checkpoint inhibitors (ICIs), although it can potentially be associated with any therapy that modulates the immune system. Identifying myocarditis in oncology clinical trials is challenging, given its relatively low incidence and high mortality. It should be emphasized that the recommendations below derive from expert consensus, given the scarcity of scientific data on the subject.
The classic model of cardiotoxicity is ventricular dysfunction caused by anthracyclines, which remain among the most widely used chemotherapy classes today. HF occurs in up to 30% of patients, usually after months of treatment, and is related to cumulative doses above 300 mg/m². In most cases it manifests subacutely or chronically, months to years after treatment, with irreversibility as its predominant feature. Acute anthracycline-related myocarditis is a rare manifestation, bears no relation to dose, and is reversible in most cases. The mechanism of toxicity is directly linked to oxidative stress resulting from anthracycline metabolism, along with inhibition of topoisomerase IIb, which ultimately damages cardiomyocyte DNA through mitochondrial dysfunction and apoptosis.
Cyclophosphamide is a nitrogen-mustard alkylating agent usually given in chemotherapy regimens that include concomitant anthracyclines. It can cause acute toxicity in the form of acute hemorrhagic, multifocal myocarditis, characterized by endothelitis, hemorrhagic capillaritis, and thrombogenesis.
Immune checkpoint inhibitors (ICIs) are currently the most studied model of drug-induced myocarditis; the most commonly used agents are nivolumab, durvalumab, ipilimumab, pembrolizumab, and atezolizumab. This therapy has revolutionized cancer treatment in recent years, improving survival in lung cancer, head and neck cancer, renal carcinoma, melanoma, and other malignancies. Its mechanism of action is blockade of T-lymphocyte apoptosis (anti-CTLA4, anti-PD1, anti-PDL1), culminating in lymphocyte activation throughout the body. While this reactivates lymphocytes and antitumor immunity, the activated T lymphocytes can trigger severe myocarditis, fatal in up to 50% of cases. Clinically, it manifests in about 0.2% of patients, on average 30 to 90 days after treatment begins.
6.7.2. Diagnosis of Acute Cardiotoxicity
Myocarditis in cancer patients should be diagnosed in cardiac presentations without an alternative primary diagnosis (e.g., acute coronary syndrome, trauma). The clinical history should consider the drug regimen and duration of treatment, as well as dose and other comorbidities. Laboratory workup includes biomarkers such as high-sensitivity troponin and NT-proBNP. In immunotherapy-related myocarditis, CPK measurement is also recommended because of the association with myositis in up to 20% of cases. The ECG can help confirm suspected myocarditis; common changes include ventricular arrhythmias, ST-T changes, PR-segment changes, bradycardia, and conduction blocks. Echocardiography is the test of choice for the diagnostic approach to myocarditis; it is performed at baseline and during follow-up, allowing serial assessment of function. The most common findings include diffuse systolic dysfunction, segmental wall-motion abnormalities, changes in ventricular sphericity, wall thickening, pericardial effusion, and strain abnormalities. CMR is the most sensitive imaging modality for the diagnosis of myocarditis and also carries prognostic value; the combination of CMR findings used to diagnose acute myocarditis has been termed the Lake Louise criteria. Many advances have been made in the CMR diagnosis of myocarditis, including improved tissue characterization through T1 and T2 mapping and extracellular volume quantification. EMB may be considered to investigate myocarditis related to chemotherapy and immunotherapy agents. Experts recommend biopsy whenever possible because, in many cases, the histopathological findings reveal the severity of the pathogenic changes of cancer-related myocarditis before overt clinical manifestation. The main antineoplastic agents with the potential to induce myocarditis with myocardial dysfunction are described in the accompanying table.
6.7.3. Treatment of Acute Cardiotoxicity
Once the diagnosis is suspected, treatment should be started immediately, since time may be important in determining the course of the disease. Although there are no large prospective studies to guide treatment of ICI myocarditis, immunosuppression is the cornerstone of therapy. Intravenous steroids are widely used for immune-related adverse events (irAEs) and may be effective in ICI myocarditis. High-dose corticosteroids (e.g., methylprednisolone 1,000 mg per day for 3 days, followed by prednisone 1 mg/kg) are widely used and may be associated with better outcomes. Mahmood et al. reported that 31% of 35 patients received corticosteroids and that high doses were associated with lower peak troponin levels and lower MACE rates compared with reduced doses. The American Society of Clinical Oncology (ASCO) recommends corticosteroids at 1 mg/kg as the initial dose. The duration of steroid therapy is unclear, but ASCO recommends a taper over 4 to 6 weeks in patients with irAEs. Serum cardiac biomarkers (e.g., troponins, BNP) may help define the need for a longer course after weaning. Additional immunosuppression can also be used: anecdotal evidence suggests that other immunosuppressants, such as intravenous immunoglobulin, infliximab, mycophenolate, tacrolimus, antithymocyte globulin, plasmapheresis, abatacept, and alemtuzumab, may be effective.
In the study by Mahmood et al., a small number of patients received other non-steroid immunosuppressants. Given the lack of robust data on their efficacy in ICI myocarditis, these agents are generally reserved for refractory or very severe disease. We suggest considering the addition of non-steroid immunosuppression in patients who do not show symptomatic, functional, or biomarker improvement within 24 to 48 hours of starting corticosteroids. The choice of the second agent is not established and may be driven by availability and contraindications; several sequential immunosuppressants may be needed to achieve remission. We recommend starting high-dose intravenous steroids at the time of diagnosis of ICI myocarditis (methylprednisolone 1 mg/kg/day). Cardiac biomarkers (troponin and BNP) should be checked serially. If cardiac biomarkers continue to rise despite high-dose steroids, plasmapheresis should be started. An additional immunosuppressant should be added if cardiac biomarkers continue to rise or if arrhythmias or HF appear or worsen. The choice of immunosuppressant depends on local experience and coexisting comorbidities. We recommend a single dose of infliximab (5 mg/kg) if there are no contraindications (e.g., tuberculosis, hepatitis); alternatively, antithymocyte globulin (10 to 30 mg/kg), alemtuzumab (30 mg once), or abatacept (500 mg) may be used. Between 3 and 5 days after corticosteroids are started, ventricular function should be reassessed (by echocardiography or CMR). Patients who show significant improvement in LV function (LVEF improvement of at least 5%) can be switched to an oral corticosteroid (prednisone 40 to 60 mg per day) for a longer period (4 to 8 weeks). If biomarkers fall and the patient shows a clinical response, MMF or tacrolimus can be used to shorten the duration of steroid exposure.
Given the high mortality and morbidity of ICI myocarditis, the ICI should be discontinued even in patients with mild cardiotoxicity. Given the potential reversibility of ICI myocarditis, supportive therapies can be instituted after careful multidisciplinary consideration of the status of the underlying malignancy and the potential for recovery. Supportive strategies may include inotropic support, temporary or permanent pacing, and temporary mechanical circulatory support (e.g., intra-aortic balloon pump, percutaneous ventricular assist devices, or extracorporeal membrane oxygenation [ECMO]). A careful assessment of the RV should be made before LV assist devices are started, because ICI myocarditis is highly likely to affect the RV, which may require biventricular support. Furthermore, given the prothrombotic environment induced by the underlying malignancy and irAEs, it is essential to exclude LV thrombus with CMR or contrast echocardiography before percutaneous LV assist devices are inserted. Medical therapy for HF should be started as tolerated, including angiotensin blockers (ACE inhibitors, ARBs, ARNI), beta-blockers, and mineralocorticoid antagonists (e.g., spironolactone). The escalation sequence recommended above is summarized in the sketch below.
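The following Python sketch is a minimal restatement of that escalation sequence, assuming simplified boolean inputs; the function name and argument structure are ours, and it is illustrative only, not a clinical decision tool.

```python
from typing import Optional

def ici_myocarditis_next_step(days_on_iv_steroids: int,
                              biomarkers_rising: bool,
                              on_plasmapheresis: bool,
                              arrhythmia_or_hf_worsening: bool,
                              lvef_improvement_pct: Optional[float]) -> str:
    """Illustrative restatement of the escalation sequence described
    above; not a clinical decision tool."""
    if days_on_iv_steroids == 0:
        # High-dose IV corticosteroids at diagnosis, with serial troponin/BNP.
        return "start high-dose IV methylprednisolone; check troponin/BNP serially"
    if biomarkers_rising and not on_plasmapheresis:
        return "start plasmapheresis"
    if (biomarkers_rising and on_plasmapheresis) or arrhythmia_or_hf_worsening:
        # e.g., single-dose infliximab 5 mg/kg if not contraindicated,
        # or antithymocyte globulin, alemtuzumab, or abatacept.
        return "add a second (non-steroid) immunosuppressant"
    if (days_on_iv_steroids >= 3 and lvef_improvement_pct is not None
            and lvef_improvement_pct >= 5.0):
        # Reassess LV function at 3-5 days; switch if LVEF improves >= 5%.
        return "switch to oral prednisone 40-60 mg/day, taper over 4-8 weeks"
    return "continue current therapy with serial biomarkers"

# Example: day 2, biomarkers still rising despite steroids, no plasmapheresis yet.
print(ici_myocarditis_next_step(2, True, False, False, None))  # start plasmapheresis
```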
The safety of restarting ICI therapy after resolution of myocarditis is not known. In one study of 40 patients who developed irAEs (1 with ICI myocarditis) in whom the ICI was reintroduced (43% with the same agent), 22 (55%) developed recurrent irAEs over 14 months of follow-up. Extrapolating these data to ICI myocarditis, and given the high likelihood of irAE recurrence with rechallenge, ASCO recommends permanent discontinuation of the ICI in all cases of ICI myocarditis. Successful rechallenge has been reported in one case of mild myocarditis, and ICI rechallenge can be attempted in selected cases of mild, asymptomatic (grade I) ICI myocarditis, especially with a low-risk ICI such as pembrolizumab. This recommendation, however, remains controversial.
6.7.4. Prognosis
The prognosis of ICI myocarditis is difficult to define because of its rare occurrence. In a multicenter registry of 35 patients with ICI myocarditis, nearly half (n = 16) developed major adverse cardiovascular events over a period of 102 days (6 cardiovascular deaths, 3 cardiogenic shocks, 4 cardiac arrests, 3 complete heart blocks). In a French two-center registry of 30 patients with ICI myocarditis, eight patients died of cardiovascular complications. A recent study that followed 101 patients with ICI myocarditis showed a MACE rate of 51% over 162 days of follow-up. Among 250 patients with ICI myocarditis reported to the US Food and Drug Administration Adverse Event Reporting System (FAERS), the fatality rate was 50%, with no difference in fatality rate by age, sex, year of reporting, or ICI type (anti-programmed cell death protein-1/programmed cell death ligand-1 vs. anti-cytotoxic T-lymphocyte protein-4). Mahmood et al. found that patients with ICI myocarditis and elevated troponin at hospital discharge had significantly higher MACE rates (discharge troponin T ≥1.5 ng/mL: HR 4.0; 95% CI 1.5-10.9; p=0.003). Escudier et al. reported that 80% of patients with ICI myocarditis and conduction disease died of cardiovascular causes. A recent study of patients with ICI myocarditis reported that global longitudinal strain (GLS) obtained at diagnosis was strongly associated with MACE over 162 days of follow-up. Given the small number of patients in these studies, it is difficult to identify risk factors for poor prognosis in patients presenting with ICI myocarditis. Overall, recovery rates with appropriate therapy have been substantial: 67% of patients who received steroids recovered LV function in the French registry, and recovery has been described even in patients with fulminant ICI myocarditis who required mechanical hemodynamic support.
6.7.5. Prevention
Most published studies on preventing chemotherapy-induced cardiotoxicity are based on anthracyclines and anti-HER2 agents. Prevention of cardiotoxicity should begin before cancer treatment, with an assessment of the patient's cardiovascular risk and interaction between the cardiologist and the oncologist, so as to plan the best approach during oncologic treatment. Patients at greatest risk of cardiotoxicity are those with the classic cardiovascular risk factors (hypertension, diabetes mellitus, dyslipidemia, smoking, obesity, and sedentary lifestyle, among others) or those with greater exposure to cardiotoxic drugs (high cumulative anthracycline doses, combinations of cardiotoxic drugs, and prior chemotherapy or radiotherapy). The main recommendations for the prevention of cardiotoxicity are described in the accompanying table.
Among cardioprotective medications, dexrazoxane, an iron chelator, is the only drug approved for the prevention of cardiotoxicity. Its effect against anthracycline cardiotoxicity has been demonstrated in several studies in both adult and pediatric populations. The limitations to dexrazoxane use are its high cost and some potential adverse effects, such as interference with anthracycline efficacy, a risk of secondary tumors (the evidence is controversial), and bone marrow toxicity. It is indicated in adults with advanced or metastatic breast cancer who have received a prior cumulative dose of 300 mg/m² of doxorubicin or 540 mg/m² of epirubicin, when continued anthracycline treatment is necessary (the dose thresholds are restated in the sketch at the end of this section).
The use of cardiovascular drugs such as beta-blockers, ACE inhibitors, and angiotensin receptor blockers (ARBs) to prevent anthracycline cardiotoxicity is controversial and rests on few clinical trials. Some evidence has shown benefit from beta-blockers and ACE inhibitors in patients exposed to high cumulative anthracycline doses or in high-risk patients with positive troponin during chemotherapy. At lower cumulative anthracycline doses, this benefit was not demonstrated with beta-blockers, although a modest preventive effect was seen with ARBs. The CECCY trial, a Brazilian study that tested beta-blockers for primary prevention of anthracycline cardiotoxicity, showed no benefit of carvedilol in preventing anthracycline-related cardiotoxicity; carvedilol was, however, associated with attenuated troponin values and a lower percentage of patients developing diastolic dysfunction. With regard to trastuzumab, some studies also point to benefit from cardiovascular drugs both in preventing cardiotoxicity and after cardiotoxicity appears, aiding recovery of ventricular dysfunction. The decision to suspend chemotherapy, and to resume it, should be made jointly, weighing the risks and benefits of continuing oncologic treatment.
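As referenced above, the dexrazoxane indication reduces to a simple threshold check. The sketch below merely restates those criteria in code, with argument names of our choosing; it is illustrative, not a prescribing tool.

```python
def dexrazoxane_indicated(is_adult: bool,
                          advanced_or_metastatic_breast_cancer: bool,
                          cumulative_doxorubicin_mg_m2: float,
                          cumulative_epirubicin_mg_m2: float,
                          anthracycline_to_be_continued: bool) -> bool:
    """Restates the indication above: adults with advanced or metastatic
    breast cancer, prior cumulative dose of 300 mg/m2 of doxorubicin or
    540 mg/m2 of epirubicin, and continued anthracycline therapy needed."""
    dose_reached = (cumulative_doxorubicin_mg_m2 >= 300.0
                    or cumulative_epirubicin_mg_m2 >= 540.0)
    return (is_adult and advanced_or_metastatic_breast_cancer
            and dose_reached and anthracycline_to_be_continued)

# Example: adult, metastatic disease, 310 mg/m2 of doxorubicin, continuing therapy.
assert dexrazoxane_indicated(True, True, 310.0, 0.0, True)
```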
6.8.1. Causal Factors
Myocarditis in children and adolescents has distinctive etiologic features, and its diagnosis may be underestimated because its initial presentation resembles the many common viral illnesses of childhood. An estimated 83% or more of patients present to emergency services on two or more visits before the diagnosis is made. In retrospective analyses, chest pain was reported predominantly by children over 10 years of age, while the most common signs in younger children were tachypnea, fever, and respiratory distress. The application of diagnostic algorithms in emergency departments has shown promise, with the potential to increase the number of patients in whom the disease is suspected. Regarding etiology, in studies evaluating viral panels collected during the acute illness with biopsy confirmation, parvovirus B19 predominates, followed by enteroviruses, Coxsackievirus B, and human herpesvirus. Cases related to arboviruses, responsible for dengue, Zika, and Chikungunya, have been described in endemic regions around the world. More recently, with the SARS-CoV-2 pandemic, presentations with myocardial injury, with or without multisystem inflammatory syndrome, have been reported, with a pathophysiology that remains poorly understood. Survivors of childhood cancer treatment, mainly those treated with anthracyclines and checkpoint inhibitors, constitute a high-risk group for the inflammatory process that leads to HF in adulthood.
6.8.2. Prognosis
The incidence and prevalence of myocarditis in the pediatric age group are difficult to estimate because of the broad spectrum of symptoms, which can range from a mild viral illness without hemodynamic compromise to congestive HF with ventricular dysfunction, arrhythmias, and sudden death. Because symptoms are often nonspecific, a significant number of cases go undiagnosed, which makes the true incidence and prognosis difficult to characterize. It is, however, the leading etiology of dilated cardiomyopathy in children. With improvements in intensive care units, including the availability of mechanical circulatory support, the prognosis of children of all ages has improved, with the possibility of complete recovery even in fulminant disease. The main outcomes in pediatric patients include complete recovery, progression to dilated cardiomyopathy, and death or heart transplantation. In children with viral myocarditis, the prognosis is thought to be better than in dilated cardiomyopathies; survival of pediatric patients with myocarditis can reach 93%. Nevertheless, a multicenter study spanning all age groups showed significant mortality in neonates and infants: survival in this age group was 33% to 45%, with clinical improvement in 23% to 32%. In children aged 1 to 18 years, survival was better, around 78% to 80%, with clinical improvement in 46% to 67%. In a recent study from the Pediatric Cardiomyopathy Registry (PCMR), children with biopsy-confirmed myocarditis had 75% survival at 3 years; 54% of the group normalized ventricular dimensions and function, and only 20% retained echocardiographic abnormalities. In another study of 28 patients diagnosed with myocarditis, only 17 survived to hospital discharge, with varying degrees of improvement in cardiac function.
The remaining 11 patients progressed to refractory HF; heart transplantation was required in seven cases, and death occurred in four. Predictors of poor prognosis were: ejection fraction below 30%, fractional shortening below 15%, left ventricular dilatation, and moderate-to-severe mitral regurgitation. Several case series of children who required mechanical circulatory support for myocarditis report survival rates between 67% and 83%; among 21 patients on Berlin Heart EXCOR support for myocarditis or dilated cardiomyopathy, 90% survived to hospital discharge. The prognosis of biopsy-proven myocarditis depends on symptom severity, histological classification, and biomarkers. Acute fulminant myocarditis is associated with better survival. Giant cell myocarditis, although rare, carries a poor prognosis, with a median survival of 5.5 months and a mortality-or-transplant rate of 89%. Myocarditis accounts for at least 50% of dilated cardiomyopathies in childhood, and the outcome of patients with viral myocarditis is better than that of patients with dilated cardiomyopathy. For this reason, myocarditis should always be suspected and supportive measures instituted early, so that a patient with myocarditis is not referred to the transplant list without a chance of recovery; transplantation in myocarditis should be considered only when recovery is unfavorable despite adequate therapeutic management.
Immunoglobulin (IVIG) has become part of immunomodulatory treatment in children with acute myocarditis in many centers, at the standard dose of 2 g/kg over 24 hours. This practice has been in place since the classic publication by Drucker et al. in 1994, which demonstrated a trend toward recovery of ventricular function in those who received immunoglobulin. In a cohort of 94 patients with recent-onset cardiomyopathy, IVIG was given to 22% of patients, and 5-year follow-up showed a higher recovery rate compared with patients who did not receive immunoglobulin. In a Taiwanese study of 94 patients, ROC curve analysis identified ejection fraction <42% (sensitivity 86.7%, specificity 82.8%) and troponin I >45 ng/mL (sensitivity 62.6%, specificity 91%) as the measures most strongly associated with mortality (these cut-offs are restated in the sketch below). Several studies have shown that patients who survive the initial acute phase have a more favorable long-term outcome, in contrast to those with more insidious disease. Histological evidence of myocarditis as the cause of dilated cardiomyopathy has been considered a positive prognostic indicator for recovery, with cure rates between 50% and 80% at 2 years. Even so, progression to chronic HF requiring heart transplantation can occur late, even after initial clinical improvement.
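As noted above, the ROC cut-offs from the Taiwanese cohort can be captured as two simple flags. This is a restatement of those thresholds only, with names of our choosing; it is not a validated risk score.

```python
def pediatric_myocarditis_risk_flags(lvef_pct: float,
                                     troponin_i_ng_ml: float) -> dict:
    """Flags from the ROC analysis cited above: LVEF < 42% (sensitivity
    86.7%, specificity 82.8%) and troponin I > 45 ng/mL (sensitivity
    62.6%, specificity 91%) were the measures most strongly associated
    with mortality. Illustration only."""
    return {
        "lvef_below_42_pct": lvef_pct < 42.0,
        "troponin_i_above_45_ng_ml": troponin_i_ng_ml > 45.0,
    }

# Example: LVEF of 38% with troponin I of 50 ng/mL raises both flags.
print(pediatric_myocarditis_risk_flags(38.0, 50.0))
```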
6.9.1. Diagnosis and Treatment
Myocarditis and pericarditis frequently present together in clinical practice and represent different ends of the spectrum of inflammatory myopericardial syndromes. This reflects the fact that the two diseases share common etiologic agents (especially viral). Rarely, however, are the myocardium and pericardium affected with the same intensity; most commonly, either myocarditis predominates (perimyocarditis) or pericarditis predominates (myopericarditis). Distinguishing these presentations matters because it affects prognosis and treatment: myopericarditis usually has a good course, without HF or constrictive pericarditis. In acute myocarditis, pericardial involvement (perimyocarditis) has prognostic importance. In the study by Di Bella et al., which evaluated a cohort of 467 patients with acute viral/idiopathic myocarditis diagnosed by CMR, approximately 24% of patients had pericardial involvement, and the presence of pericarditis increased the risk of cardiac events 2.5-fold (a combined endpoint of death, heart transplantation, ICD implantation, and hospitalization for decompensated HF). The diagnosis of myocarditis with associated acute pericarditis should be suspected in patients with a diagnosis of myocarditis and at least two of the following criteria (restated in the sketch at the end of this section): pleuritic chest pain, which may be difficult to identify because of pain from the myocardial involvement; pericardial friction rub; ECG changes suggestive of pericarditis, with PR-segment depression and diffuse upwardly concave ST elevation; and new or worsening pericardial effusion. Laboratory tests usually show leukocytosis with lymphocyte predominance (in viral illness) and elevation of CRP and the erythrocyte sedimentation rate (ESR). CMR is the noninvasive test with the best accuracy for evaluating pericardial involvement in patients with myocarditis; it reveals pericardial inflammation, thickening, effusion, and masses, and is indicated in all cases of diagnostic doubt (class of recommendation I, level of evidence C). In patients with myocarditis and pericardial involvement, treatment should follow the recommendations for myocarditis and depends essentially on the underlying cause. In viral/idiopathic cases without ventricular dysfunction, NSAIDs for control of the pericardial injury should be considered with caution and at reduced doses, since experimental studies have shown NSAIDs to increase mortality and worsen myocardial inflammation.
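The suspicion rule referenced above (diagnosed myocarditis plus at least two of the four pericarditis criteria) lends itself to a compact restatement. The sketch below is illustrative, with boolean inputs standing in for the clinical findings.

```python
def suspect_myocarditis_with_pericarditis(myocarditis_diagnosed: bool,
                                          pleuritic_chest_pain: bool,
                                          pericardial_friction_rub: bool,
                                          typical_pericarditis_ecg: bool,
                                          new_or_worse_effusion: bool) -> bool:
    """Rule stated above: suspect associated acute pericarditis when
    myocarditis is diagnosed and >= 2 of the 4 criteria are present."""
    criteria_met = sum([pleuritic_chest_pain,
                        pericardial_friction_rub,
                        typical_pericarditis_ecg,
                        new_or_worse_effusion])
    return myocarditis_diagnosed and criteria_met >= 2

# Example: diagnosed myocarditis with a friction rub and a new effusion.
assert suspect_myocarditis_with_pericarditis(True, False, True, False, True)
```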
Previous studies have indicated that 2.6% to 25% of patients with suspected acute myocardial infarction (AMI) turn out to have myocardial infarction with non-obstructive coronary arteries (MINOCA). Several etiologies can underlie suspected AMI with no culprit lesion on angiography, among which acute myocarditis has been recognized as particularly important. Clinical presentations typical of AMI, such as chest pain, ST-segment elevation, and rising serum markers, commonly occur in patients ultimately diagnosed with myocarditis. Furthermore, in the clinical setting of acute illness with troponin elevation, it can be challenging to differentiate a type 2 AMI from nonischemic causes of myocardial injury, chiefly myocarditis. Type 2 AMI is infarction secondary to ischemia from increased oxygen demand or decreased supply, caused, for example, by coronary artery spasm, coronary embolism, anemia, arrhythmias, hypertension, or hypotension. The term myocardial injury is used when there is evidence of elevated troponin values, with at least one value above the 99th-percentile upper reference limit (URL); the injury is considered acute if there is a rise and/or fall of troponin values. The diagnosis of AMI is specific when acute myocardial injury is accompanied by clinical evidence of acute myocardial ischemia, requiring both detection of a rise and/or fall of troponin values and the presence of at least one of the following: symptoms of myocardial ischemia; new ischemic ECG changes; development of pathological Q waves; imaging evidence of new loss of viable myocardium or a new wall-motion abnormality in a pattern consistent with an ischemic cause; and/or identification of a coronary thrombus by angiography or autopsy. The main clinical entities that can mimic ST-segment elevation AMI are myocarditis/pericarditis, Takotsubo cardiomyopathy, the J-wave syndromes (a term covering both Brugada syndrome and early repolarization syndrome), secondary repolarization abnormalities (such as left bundle branch block, ventricular pacing, and ventricular hypertrophy), electrolyte disturbances (hyperkalemia and hypercalcemia), and other nonischemic causes (such as Wolff-Parkinson-White syndrome, pulmonary embolism, intracranial hemorrhage, hypothermia, and the post-cardiac arrest state); evolving electrocardiographic changes, together with differences in clinical history, can help in the differentiation. In vivo tissue characterization with CMR allows identification of edema/inflammation in acute coronary syndromes/myocarditis and diagnosis of chronic diseases and fibrotic conditions (eg, in hypertrophic and dilated cardiomyopathies, aortic stenosis, and amyloidosis). In nonischemic disease, the pattern and distribution of late gadolinium enhancement (LGE) can offer clues to etiology and prognostic significance. Myocarditis usually causes subepicardial/midwall scarring, generally (though not always) in a non-coronary distribution, sparing the subendocardium. In myocarditis, T2-weighted imaging can also identify regions of inflammation, characteristically in a non-coronary distribution.
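The diagnostic logic of the universal definition described above is essentially rule-based, and a compact sketch can make its structure explicit. The encoding below is a simplified illustration: the URL value, field names, and troponin series are hypothetical, and this is not a validated clinical tool.

```python
# Simplified encoding of the AMI definition described above: acute myocardial
# injury (troponin rise/fall with at least one value above the 99th-percentile
# URL) plus at least one clinical criterion of acute ischemia.
# Not a validated clinical tool; threshold and field names are hypothetical.

TROPONIN_URL_99 = 0.04  # ng/mL; assay-specific 99th-percentile URL (example value)

def acute_myocardial_injury(troponin_series):
    above_url = any(v > TROPONIN_URL_99 for v in troponin_series)
    rise_or_fall = max(troponin_series) != min(troponin_series)
    return above_url and rise_or_fall

def meets_ami_definition(troponin_series, ischemia_criteria):
    """ischemia_criteria: dict keyed by the five criteria listed in the text."""
    return acute_myocardial_injury(troponin_series) and any(ischemia_criteria.values())

criteria = {
    "ischemic_symptoms": True,
    "new_ischemic_ecg_changes": False,
    "new_pathological_q_waves": False,
    "imaging_new_loss_of_viable_myocardium": False,
    "coronary_thrombus_on_angiography_or_autopsy": False,
}
print(meets_ami_definition([0.01, 0.35, 0.20], criteria))  # True
```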
In addition, parametric T1 mapping is also available, providing quantitative, objective assessment of edema/inflammation (eg, in AMI/myocarditis). There is a dynamic interplay between inflammation and fibrosis in several precursors of HF, such as AMI and myocarditis. Early diagnosis of HF with biomarkers and imaging is fundamental: while CMR is useful for assessing the extent of injury, serial biomarker measurements indicate whether inflammation and fibrosis are progressive. Clinically, a case of myocarditis mimicking AMI is extremely difficult to diagnose precisely. Defining the coronary anatomy is mandatory, whether by invasive coronary angiography or by coronary CT angiography. Moreover, a correct diagnosis of myocarditis is itself a challenge, owing to the nonspecific patterns of its clinical presentation and the lack of a precise and reliable diagnostic method. Although guidelines recommend endomyocardial biopsy (EMB) as the ideal method, the diagnosis of myocarditis in routine practice is generally based on comprehensive consideration of the patient's medical history, clinical manifestations, and complementary tests, among which CMR has a significant advantage in detecting myocardial abnormalities and in accurately discriminating patients with myocarditis from those with true AMI. We suggest a flowchart for evaluating patients with AMI versus myocarditis ( ).
In 2018, the World Health Organization (WHO) recognized that rheumatic fever is endemic in low-income countries and called for global action focused on prevention, diagnosis, and secondary prophylaxis. Rheumatic fever is a biphasic disease whose acute attack manifests with variable combinations of arthritis, carditis, chorea, and cutaneous and subcutaneous lesions; myocarditis occurs in more than 50% of patients. About 5% of patients with acute rheumatic myocarditis have clinically significant manifestations that prompt medical care, and up to 50% of patients with acute carditis progress to chronic rheumatic heart disease (the late phase), characteristically mitral and/or aortic valve disease. The prevalence of rheumatic carditis in Brazil is not known, but several lines of evidence indicate that it is a frequent and underdiagnosed condition. In 2013, the Brazilian Unified Health System (SUS) reported 5,169 hospitalizations related to acute rheumatic fever. It is currently estimated that about 40 million people worldwide have chronic rheumatic heart disease and that the disease causes approximately 300,000 deaths per year. A Brazilian study of 5,996 students from 21 schools in the state of Minas Gerais found a 0.42% prevalence of chronic rheumatic heart disease, 2 to 10 times the average documented in developed countries. Rheumatic carditis should be suspected whenever an acute attack of rheumatic fever is suspected, initially by applying the Jones criteria, which were revised in 2015. Epidemiologic stratification of rheumatic risk is recommended: patients are considered high risk if they come from regions where the incidence of rheumatic fever exceeds 2 per 100,000 schoolchildren (5 to 14 years of age) per year or where the prevalence of rheumatic valve sequelae exceeds 1 per 1,000 persons per year; much of the Brazilian population is estimated to live in regions with these characteristics. The 2015 revision also added echocardiographic criteria and expanded use of the criteria for diagnosing recurrences ( ). A rheumatic etiology should therefore be considered in patients with carditis in Brazil, especially in young patients, in low-income regions, and/or in those with a history of rheumatic valve disease. When an acute attack of rheumatic fever is documented or HF is clinically manifest, active investigation for rheumatic carditis is essential. Rheumatic carditis is a pancarditis, involving the pericardium, myocardium, and endocardium to variable degrees; endocardial involvement is the main manifestation, as acute valvulitis, present in 90% of cases, characteristically acute regurgitant mitral and/or aortic valve disease. When symptoms occur, the main mechanism is acute valve disease (preferentially mitral) and, less frequently and with less intensity, myocarditis and pericarditis. The initial focus of the investigation is therefore detection of valve disease, which may be recognized on physical examination; echocardiography is mandatory, initially transthoracic, with transesophageal assessment reserved for the infrequent situations of an inadequate acoustic window. The 12-lead ECG, besides PR-interval prolongation, may show a long QT interval and changes compatible with pericarditis and left-chamber overload. Troponin and CK-MB are usually not elevated, indicating that myocardial damage is small. Chest radiography can be useful to document cardiomegaly and congestion.
After this initial evaluation, the diagnostic hypothesis may be one of the following:

- Subclinical carditis: clinical examination without warning findings, ECG showing only a prolonged PR interval, and/or Doppler echocardiography showing mild mitral and/or aortic regurgitation
- Mild carditis: tachycardia disproportionate to fever, an identifiable regurgitant murmur, ECG showing only a prolonged PR interval, chest radiograph without warning findings, and Doppler echocardiography showing mild to moderate mitral and/or aortic regurgitation
- Moderate carditis: criteria for mild carditis plus mild HF symptoms and/or a long QT interval and/or cardiomegaly and congestion on radiography and/or mild to moderate dilation of the left chambers
- Severe carditis: limiting HF symptoms with significant valve regurgitation and/or marked cardiomegaly and/or ventricular systolic dysfunction

Rheumatic myocarditis itself is therefore rarely florid; it should be suspected when criteria for rheumatic carditis are present and HF is manifest without anatomically significant acute valve disease. In that situation, meticulous evaluation for alternative diagnoses of myocarditis is also essential. Patients with mild, moderate, and severe disease should have the diagnostic work-up continued with imaging. Gallium-67 scintigraphy has high sensitivity and specificity, is the most studied test, and should be performed first. Antimyosin scintigraphy is less sensitive, as is PET; both are options when gallium is unavailable or when there is evidence of other differential diagnoses. MRI lacks studies specific to rheumatic fever (all the more so because involvement is primarily valvular) and is chiefly another test useful for differential diagnosis. EMB has low sensitivity but very high specificity, the finding of Aschoff nodules being pathognomonic of rheumatic myocarditis; it is indicated in severe, refractory cases ( ). For all patients with rheumatic carditis, streptococcal eradication is recommended, even though the disease is a late immune response. Treatment of the subclinical and mild forms consists of controlling the symptoms of the acute attack and monitoring the course. The moderate and severe forms call for corticosteroids, initially oral, with pulse therapy if refractory. Medications such as ACE inhibitors, furosemide, spironolactone, and digoxin should be used if HF is manifest. Refractoriness warrants considering surgical valve treatment in the acute phase ( ).
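Because the four categories above form an ordered rule chain, in which the most severe criterion met determines the class, they can be sketched as a simple classifier. The finding names below are hypothetical simplifications of the clinical criteria, for illustration only.

```python
# Rule-chain sketch of the four carditis categories above; the most severe
# criterion met determines the class. Finding names are hypothetical
# simplifications of the clinical criteria, for illustration only.

def classify_carditis(f):
    if (f["limiting_hf_symptoms"] or f["severe_regurgitation"]
            or f["marked_cardiomegaly"] or f["systolic_dysfunction"]):
        return "severe"
    if (f["mild_hf_symptoms"] or f["long_qt"]
            or f["cardiomegaly_congestion_on_xray"] or f["left_chamber_dilation"]):
        return "moderate"
    if f["disproportionate_tachycardia"] or f["regurgitant_murmur"]:
        return "mild"
    return "subclinical"

findings = {
    "limiting_hf_symptoms": False, "severe_regurgitation": False,
    "marked_cardiomegaly": False, "systolic_dysfunction": False,
    "mild_hf_symptoms": True, "long_qt": False,
    "cardiomegaly_congestion_on_xray": False, "left_chamber_dilation": False,
    "disproportionate_tachycardia": True, "regurgitant_murmur": True,
}
print(classify_carditis(findings))  # moderate
```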
Cardiac involvement in autoimmune diseases may include the pericardium, myocardium, endocardium, valves, and coronary arteries. Among these entities, those most notable with respect to myocarditis are sarcoidosis, giant cell myocarditis, Behçet disease, eosinophilic granulomatosis with polyangiitis, SLE, scleroderma, and rheumatoid arthritis. There are clear limitations to the diagnosis of myocarditis and to estimates of its prevalence in autoimmune diseases, but the possibility should be considered in the presence of signs and symptoms suggestive of cardiac involvement, whether arrhythmias, syncope, HF, chest pain, or elevated markers of myocardial necrosis, especially in patients with a history of autoimmune disease or when cardiac involvement accompanies inflammatory symptoms in other organ systems. Elevation of nonspecific inflammatory markers, including CRP/ESR, and of markers of myocardial injury, such as troponin and BNP, is usually present but lacks specificity. ECG and echocardiography should be performed in all patients with autoimmune disease in whom cardiac involvement is suspected. MRI can be used as a sensitive and specific method for evaluating myocarditis and also broadens the differential diagnosis. PET has been another noninvasive option, especially when sarcoidosis is suspected. Testing for autoimmunity markers, such as ANA, rheumatoid factor, and ANCA, should be considered and guided by clinical suspicion. EMB is the gold standard for the diagnosis of myocarditis, whether from autoimmune disease or other etiologies; using techniques beyond histology, it can differentiate infectious from noninfectious involvement and can also identify vasculitis or other noninflammatory myocardial diseases. Treatment of myocarditis in autoimmune diseases is discussed in another section of this document.
9.1. Noninvasive and Invasive Evaluation of Arrhythmias in the Acute and Chronic Phases of Myocarditis of Various Causes

Cardiac arrhythmias are relatively frequent manifestations in patients with myocarditis and can occur at any phase of the disease. The arrhythmogenic mechanisms are directly or indirectly related to the degree of myocardial inflammatory injury. In the acute phase, viral injury and the inflammatory response cause myocytolysis with fibrosis, which promotes sympathetic hyperactivity and ion-channel dysfunction, especially in calcium regulation, creating the electrophysiological substrate for arrhythmias. The greater the cellular damage and the degree of inflammatory involvement, the higher the probability of ventricular arrhythmias, with reentry as the main arrhythmogenic mechanism. A broad spectrum of bradyarrhythmias and tachyarrhythmias occurs in the context of myocarditis. Atrioventricular (AV) block, ventricular repolarization abnormalities, and QT-interval prolongation are common findings in the acute phase of the disease. Atrial fibrillation and atrial tachycardias may also be present in the course of acute myocarditis or in the chronic phase. Ventricular arrhythmias may manifest as premature beats and/or ventricular tachycardias, which may be monomorphic or polymorphic and nonsustained or sustained (duration ≥30 seconds). Symptoms vary with the form of the arrhythmia, the hemodynamic state, and the degree of left ventricular dysfunction, and may include palpitations, tachycardia, syncope, or sudden death. The direct diagnostic methods used for noninvasive evaluation of arrhythmias are the baseline 12-lead ECG, continuous ambulatory electrocardiography for 24 or 48 hours (Holter), and event monitoring (looper systems). The ECG is usually abnormal in patients with myocarditis, but these findings have low sensitivity and specificity. Ukena et al. reported that prolonged QRS duration is an independent predictor of cardiac death or heart transplantation in patients with suspected myocarditis. QTc prolongation above 440 ms, QRS-axis deviation, and ventricular ectopy in the course of myocarditis do not appear to be independent predictors of worse prognosis. The ECG is a very useful tool for detecting bradyarrhythmias and tachyarrhythmias that present in sustained form. For documenting paroxysmal arrhythmias, ambulatory electrocardiographic monitoring can be used; the duration of monitoring depends on the frequency of symptoms, and the more sporadic they are, the harder they are to document. Twenty-four-hour Holter monitoring makes it possible to document arrhythmias and atrioventricular conduction abnormalities; it also assists in analyzing the day-night distribution of arrhythmias, autonomic nervous system activity, and the likely electrophysiological mechanism. We recommend 24-hour Holter monitoring during the hospital phase to evaluate possible asymptomatic arrhythmias and intermittent atrioventricular conduction abnormalities ( ). Holter monitoring may also be recommended in the chronic phase of myocarditis as an auxiliary method for stratifying the risk of sudden death.
The true role of invasive evaluation with electrophysiological study in stratifying the risk of sudden death in patients with myocarditis is still under investigation. One consideration is that the reproducibility of significant arrhythmic events likely varies with the etiology and the type of myocardial involvement. In cardiac sarcoidosis, for example, programmed electrical stimulation reproduces clinically significant events with high fidelity and is useful in decision making. Patients who have had nonsustained or sustained monomorphic ventricular tachycardia at some point in the disease, extensive late gadolinium enhancement, or low-voltage zones on electrophysiological study with electroanatomic mapping appear to have a worse prognosis, and these findings can assist in stratifying the risk of sudden death. In the absence of specific data, cautious use of this method of sudden death risk stratification is recommended in these patients, especially those who are asymptomatic.

9.2. Treatment of Arrhythmias and Prevention of Sudden Death in the Acute and Subacute Phases

Arrhythmias may be associated with myocarditis mainly in the acute phase but also in the chronic phase, depending on the degree of tissue injury, in which inflammation and residual fibrosis stand out, although the physiological basis is broad ( ). Arrhythmias may be present in 33.7% of patients hospitalized for myocarditis, presenting as both tachy- and bradyarrhythmias, and are associated with comorbidities such as hyperthyroidism, older age, obesity, HF, electrolyte imbalance, and valve disease. Preexisting cardiomyopathies, such as arrhythmogenic right ventricular dysplasia, and preexisting channelopathies are also associated with the occurrence of arrhythmias during myocardial inflammation. Bradyarrhythmias are generally associated with AV block, which can be of various degrees and occurs predominantly in the acute phase; even so, they are rare. Ogunbayo et al. observed a 1.7% prevalence of AV block, of which only 1.1% were advanced blocks, during the in-hospital phase among 31,760 patients admitted with a diagnosis of myocarditis in the US Nationwide Inpatient Sample database. Advanced third-degree AV block was associated with greater morbidity and mortality. Atrial fibrillation may occur in up to 9% of patients with acute myocarditis during hospitalization and is associated with higher in-hospital mortality (OR, 1.7; 95% CI, 1.1-2.7; p = 0.02), cardiogenic shock (OR, 1.9; 95% CI, 1.3-2.8; p < 0.001), and cardiac tamponade (OR, 5.6; 95% CI, 1.2-25.3; p = 0.002). Ventricular arrhythmias, those most associated with the probability of sudden death, may account for up to approximately one quarter of all arrhythmias recorded in patients hospitalized for myocarditis, ventricular tachycardia being the most frequent. Management of arrhythmias in the acute phase should follow the principle that the process is transitory: frequent ectopy and nonsustained tachycardias should not be treated with specific antiarrhythmic drugs, with the exception of beta-blockers when indicated. A temporary pacemaker may be used for advanced AV block in this phase, and the indication for a permanent pacemaker or implantable cardioverter-defibrillator should follow conventional indications ( ).
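As a worked example of how the confidence intervals quoted above are obtained, the sketch below computes a Wald 95% CI for an odds ratio from a 2x2 table. The counts are hypothetical, chosen only so that the odds ratio lands near the reported 1.7; they are not the study's data.

```python
# Wald 95% CI for an odds ratio from a 2x2 table. The counts are hypothetical,
# chosen only so the OR lands near the reported 1.7; they are not study data.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a, b = deaths/survivors with AF; c, d = deaths/survivors without AF."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(a=34, b=166, c=120, d=1000)
print(f"OR {or_:.1f} (95% CI {lo:.1f}-{hi:.1f})")  # OR 1.7 (95% CI 1.1-2.6)
```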
10.1. Prognostic Markers and Clinical Course

Myocarditis presents with wide phenotypic diversity. A large proportion of individuals with acute myocarditis who present with acute dilated cardiomyopathy improve within a few days. Case series report serious long-term cardiovascular events in 10% to 20% of patients and a relapse risk of 10%. Numerous factors, both clinical and laboratory, have been implicated in prognosis. Preserved ventricular function during the acute illness has repeatedly been associated with spontaneous improvement without sequelae. Other analyses indicate that low blood pressure and heart rate, syncope, right ventricular systolic dysfunction, elevated pulmonary artery pressure, and advanced New York Heart Association functional class likely play an important role. Etiology has also proved valuable in the prognostic spectrum. Patients with acute lymphocytic myocarditis who maintained preserved ventricular function improved spontaneously and without sequelae. By contrast, the Myocarditis Treatment Trial recorded 56% mortality at 4 years among patients with HF and a left ventricular ejection fraction below 45%. Giant cell and eosinophilic myocarditis follow a grimmer course. Patients with fulminant myocarditis have a dire short-term prognosis; however, those who survive fare better than patients with several other etiologies. The ECG showed prognostic value in a recent evaluation. CMR, a test of notable value in the diagnosis of myocarditis, has shown utility with the late gadolinium enhancement technique; in a more recent publication, however, it did not confirm predictive value for either improvement in ventricular function or remodeling on long-term evaluation. Despite advances in diagnosis, prognosis remains a challenge, probably because of numerous factors, known and unknown; the causes vary enormously in their peculiarities, clinical presentation, and immunological and genetic involvement, among other aspects.

10.2. Outpatient Follow-up with Complementary Methods

Clinical follow-up together with ECG should continue in all patients who have received the diagnosis. Given the undeniable value of ventricular function, imaging should be included; echocardiography is a useful and more readily available option, providing the most relevant information in this setting ( ).
A quick glance at publications on COVID-19 and ophthalmology

Older age, high fever, an increased neutrophil/lymphocyte ratio, and high levels of acute-phase reactants have been cited as risk factors for ocular involvement. Direct ophthalmic involvement by the virus takes the form of follicular conjunctivitis (6.3% prevalence), which appears late in the disease course and is usually self-limiting. Frequent hand-eye contact correlated with the conjunctival congestion observed in a cohort of 535 cases of COVID-19 in Wuhan, China.
Teleophthalmology was readily adopted by the ophthalmic community in this pandemic. Many ophthalmic subspecialties reported its successful use in triaging patients (determining whether a hospital visit was required) and in managing follow-up patients. Even rehabilitation services could be provided through telemedicine, with help from families and therapists. The reported concerns were medicolegal issues such as consent, the right of refusal (for both the patient and the doctor), monetization and its implications, and the scope of jurisdiction.
Guidelines for the resumption of services have been laid out for almost every ophthalmology subspecialty by several societies. Since the current evidence suggests only a low risk of transmission through conjunctival surfaces and tears, personal protective gear and precautions in outpatient and operating rooms have been deemed sufficient for slowly returning to the new normal across the ophthalmic subspecialties. There is no valid scientific study establishing viral tropism for ocular structures. The use of irradiated corneas and glycerol-preserved corneas for transplant, the use of ultraviolet rays and hydrogen peroxide vapor for decontamination of respirators, and smartphone-assisted slit-lamp examination are some of the novel and innovative applications brought into ophthalmic practice by this pandemic. Ophthalmology trainees' surgical training has suffered significantly; however, several academic activities, including conferences and workshops, have been facilitated by electronic media. The recent publications have catered to the needs of ophthalmologists in managing their practice amid the COVID-19 pandemic. The publications have also brought to the fore the global ophthalmic community's ingenuity and its ability to stand up to the mammoth challenges of the new era.
Characteristics of Pediatric Mild Traumatic Brain Injury and Recovery in a Concussion Clinic Population

It is estimated that more than 830 000 pediatric patients with traumatic brain injury (TBI) present to emergency departments (EDs) each year in the US. Mild TBI, including concussion, accounts for at least 75% of all TBIs reported in the US. A 2015 study of pediatric concussion in an ED cohort found that adolescents aged 12 to 17 years had a higher incidence of concussion compared with younger children, but relatively few studies address children in the 5 to 12 years age range. Pediatric patients with mild TBI present to a variety of medical settings, with a 2016 study reporting 82% of patients first seen in primary care, 11% of patients presenting to the ED, and 5.2% of patients presenting to a specialty clinic. Therefore, mild TBI incidence based solely on ED visits underestimates the number of individuals with mild TBI. A more complete understanding of the full range of needs of youth with mild TBI requires the study of all mechanisms across all age ranges and clinical settings, including the outpatient clinic population. Although most children with mild TBI recover relatively rapidly, 10% to 30% have persistent postconcussion symptoms (PPCS) lasting longer than 4 to 12 weeks, and such prolonged recovery can interfere with academics and quality of life. A large population-based study of pediatric TBI in Sweden concluded that mild TBI in youth was associated with adverse outcomes in adulthood, and that recurrent TBI and age at injury were important factors associated with outcome. These pediatric patients with mild TBI and prolonged recovery are of major concern, as they experience greater disability and require more medical resources. Therefore, better defining the problem of mild TBI, PPCS, and longer-term outcomes in children and adolescents is an important public health challenge and well suited to large, prospective cohorts recruited from pediatric mild TBI clinics. Factors that may increase risk for PPCS and prolonged recovery include age, sex, and premorbid conditions. However, much of the research regarding mild TBI recovery comes from emergency or acute care cohorts using relatively short outcome windows (ie, ≤1 month). Currently, there is no widely accepted definition or time interval for PPCS in children, further challenging clinicians' ability to identify and treat these patients. Moreover, factors associated with PPCS and prolonged recovery may differ across clinical populations (eg, athletes vs nonathletes, adolescents vs children), clinical settings (eg, specialty concussion clinics having a higher proportion of subacute or chronic patients with PPCS than primary care clinics), and time intervals (eg, 1 month vs ≥3 months after injury). The Four Corners Youth Consortium (4CYC) was formed through collaboration among academic institutions with expertise and multidisciplinary programs focused on pediatric mild TBI clinical care and research. The 4CYC is unique in that we are focused on the population of youth seen in subspecialty mild TBI and concussion clinics and have been able to capture longer follow-up after injury. This study examined trajectories of symptom recovery in patients presenting to pediatric mild TBI clinics. We hypothesized that age, sex, and premorbid factors are associated with mild TBI recovery and persistence of symptoms in this specialty clinic population.
This study was reviewed and approved by a single institutional review board at the University of Utah. Informed consent was obtained from the parent, guardian, or patient if the patient was aged 18 years, and assent was obtained from children aged 5 to 17.99 years in accordance with site-specified institutional review board compliance regulations. Consent was preferentially obtained in person but could optionally be obtained electronically through email and telephone communication. For patients who did not consent to be contacted for follow-up, demographic and initial clinical data were extracted from the electronic health record under an institutional review board–approved waiver of consent. Waiver of consent was approved to maximize completeness of the data set and permit extraction of deidentified medical information, given the low-risk nature of this registry. This study follows the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) reporting guideline.

The 4CYC Concussion Registry

The 4CYC is a multicenter collaborative that aims to build a comprehensive evidence base to promote behaviors that improve brain health among youth. Three contributing member sites, Children's National Hospital, Seattle Children's Hospital–University of Washington, and UCLA Mattel Children's Hospital, have built a prospective, observational registry of pediatric mild TBI. The University of Utah Data Coordinating Center joined the 4CYC investigators and constructed a database using National Institute of Neurological Disorders and Stroke common data elements.

Enrollment and Consent

From December 2017 to July 2019, the registry enrolled pediatric patients aged 5 to 18.99 years presenting with mild TBI within 8 weeks of injury. Patients who enrolled at age 18 years were followed through recovery even if recovery transcended their 18th year. Patients were excluded if the patient or their parent was unable to read or sign the consent document, or if the patient had an initial Glasgow Coma Scale score less than 13 or a penetrating injury.

Clinical Measures

The study used National Institute of Neurological Disorders and Stroke common data elements from the guidelines for pediatric TBI, mild TBI, and sports concussion. Sites collected demographic characteristics, injury details, medical history, and clinical neurological assessments directly from the patient or parent and any available prior medical records as part of regular clinic care. The patient- or parent-reported past medical history was self-reported during an interview conducted by a licensed health care practitioner and included the patient's preexisting comorbidities, including attention-deficit/hyperactivity disorder, anxiety, depression, learning disabilities, migraines, sleep problems, and seizures or epilepsy. This information was gathered as part of standard clinical care. Data were extracted from the electronic health record into a REDCap database (Vanderbilt University).

Follow-up and Patient Recovery

Contact information was collected from patients who consented to receive follow-up surveys. Surveys were administered directly from REDCap by text, email, or telephone call, as preferred by the patient's parent. The follow-up was performed every 3 months following the date of injury until the parent indicated the patient had fully recovered from the injury.
Recovery was defined as “all of the symptoms that were caused BY THE INJURY have GONE AWAY and DO NOT RETURN when doing activities (physical or mental), such as exercise or studying for school.” Some patients followed up in clinic as part of standard clinical care; for these patients, the date of recovery was determined by both clinical examination and interview at the time of follow-up. For patients with both parent- and clinician-reported recovery, the clinician-reported date was used for analysis.

Statistical Analysis

Age groups were designated as preadolescent (ie, age 5-12.99 years) and adolescent (ie, age 13-18.99 years). Patient and injury characteristics were summarized by age group and sex using frequencies and percentages for categorical variables or median and interquartile range for continuous variables. Differences in patient and injury characteristics between the younger and older age groups were tested using the Fisher exact test, with the exception of the number of comorbidities, for which the Kruskal-Wallis test was used. Kaplan-Meier curves were used to compare time to recovery by age group, sex, number of comorbidities, prior TBI, migraine history, and history of emotional distress (defined as anxiety and/or depression). Patients without a documented date of recovery were considered censored at the date of the last known clinic visit or follow-up survey. Log-rank tests were used to compare recovery curves. All hypothesis tests were conducted against a 2-sided alternative. P values were considered statistically significant when less than .05. Analyses were performed using SAS statistical software version 9.4 (SAS Institute). Data were analyzed from February 2019 to April 2020.
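For readers unfamiliar with this survival-analysis framing of recovery, a minimal sketch of a Kaplan-Meier comparison with a log-rank test is shown below, using the lifelines package; the recovery times are synthetic, not registry data. Here "survival" means remaining symptomatic, the event is recovery, and censoring marks loss to follow-up while still symptomatic.

```python
# Sketch of the Kaplan-Meier comparison described above using the lifelines
# package. Recovery times are synthetic, not registry data. The survival
# function here is the fraction still symptomatic over time.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
girls_t = rng.exponential(scale=10, size=120).round(1)  # weeks to recovery
boys_t = rng.exponential(scale=7, size=100).round(1)
girls_e = (rng.random(120) < 0.8).astype(int)  # 1 = recovery observed, 0 = censored
boys_e = (rng.random(100) < 0.8).astype(int)

kmf = KaplanMeierFitter()
kmf.fit(girls_t, event_observed=girls_e, label="girls/women")
ax = kmf.plot_survival_function()
kmf.fit(boys_t, event_observed=boys_e, label="boys/men")
kmf.plot_survival_function(ax=ax)

res = logrank_test(girls_t, boys_t, event_observed_A=girls_e, event_observed_B=boys_e)
print(f"log-rank p = {res.p_value:.3f}")
```

The study itself ran these analyses in SAS 9.4; the Python sketch is only meant to convey the structure of the comparison.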
Demographic Characteristics

A total of 600 patients were enrolled in the study, among whom 324 (54.0%) were female and 435 (72.5%) were adolescents. Overall demographic characteristics of the 4CYC cohort and the breakdown by sex are summarized in . Compared with boys and men, a greater proportion of girls and women were adolescents (187 patients [67.8%] vs 248 patients [76.5%]; P = .02). Most patients were non-Hispanic (475 patients [87.8%]) and White (375 patients [75.6%]). Medicaid or state Child Health Insurance Plan covered only 91 patients (15.3%). Medicaid covered more of the boys and men in our cohort than of the girls and women. Before injury, most patients were in regular school. Parental education was high, with most parents having a college degree or above and almost half having a master's or doctoral degree. A total of 372 parents (62.0%) consented to follow-up. Analysis comparing those with follow-up and those who did not consent to follow-up showed no significant differences (eTable 1 in the Supplement). Of patients who consented to follow-up, 293 (78.8%) actually responded to follow-up surveys. Patients who responded to follow-up were more likely to have comorbidities than those who did not; however, they did not differ otherwise (eTable 2 in the Supplement). Patient preinjury medical history is reported by sex in and by age group in . Significantly more girls and women reported preexisting anxiety than did boys and men (80 patients [26.7%] vs 46 patients [18.7%]; P = .03). No differences were found between sexes in preinjury diagnoses of attention-deficit/hyperactivity disorder, migraine, depression, or learning disabilities, or in the count of total comorbidities. Adolescents, compared with preadolescents, were more likely to report a diagnosis of migraines (82 patients [20.9%] vs 15 patients [10.9%]; P = .01) and a history of prior concussion (234 patients [54.8%] vs 15 patients [10.9%]; P < .001). Adolescents had more total comorbidities than preadolescents (eg, ≥3 comorbidities: 53 patients [15.1%] vs 15 patients [12.5%]; P = .008).

Injury Characteristics

Patients presented to a clinic a median (interquartile range) of 16.0 (8.0-29.0) days after injury. Injury characteristics are presented by sex in eTable 3 in the Supplement and by age group in eTable 4 in the Supplement. Girls and women were more likely to present with stable or worsening symptoms over the course of the first week, while boys and men were more likely than girls and women to have improving symptoms within the first week. There were no other significant sex differences in injury mechanism or characteristics. Acute injury severity surrogates, including neuroimaging anomalies, amnesia, and loss of consciousness, are reported in eTable 5 in the Supplement. In this cohort, relatively few patients underwent acute or subacute neuroimaging, consistent with existing clinical guidelines for pediatric mild TBI. Most injuries (452 injuries [75.3%]) came from sports or recreation, followed by being struck by or against an object or person and falling (nonsport). When examining the cause of injury by age, adolescents were more likely to suffer a sports- or recreation-related injury than preadolescents. Preadolescents were more likely to sustain a nonsport-related accident or be struck by an object or person than were adolescents. There were no other age group differences in injury characteristics (eTable 4 in the Supplement).
Recovery

Girls and women recovered at a slower rate than boys and men (persistent symptoms after injury: week 4, 217 patients [81.6%] vs 156 patients [71.2%]; week 8, 146 patients [58.9%] vs 89 patients [44.3%]; week 12, 103 patients [42.6%] vs 58 patients [30.2%]; P = .01) . There was no significant difference in persistent symptoms in adolescents vs preadolescents . Patients who reported a preinjury history of emotional distress (ie, anxiety or depression) recovered more slowly than those without (persistent symptoms after injury: week 4, 89 patients [80.9%] vs 251 patients [75.6%]; week 8, 59 patients [57.8%] vs 156 patients [50.5%]; week 12, 48 patients [48.0%] vs 99 patients [33.3%]; P = .009) ( A). Patients with a migraine history had more persistent symptoms than those without migraine (persistent symptoms after injury: week 4, 62 patients [87.3%] vs 266 patients [73.9%]; week 8, 42 patients [67.7%] vs 165 patients [49.0%]; week 12, 34 patients [55.7%] vs 108 patients [33.2%]; P = .001) ( B). Neither the overall burden of comorbidities nor a history of prior concussion showed a significant association with recovery. A post hoc analysis showed that prior emotional distress or migraine was associated with slower recovery irrespective of sex (eTable 6 and eTable 7 in the Supplement). Acute injury severity surrogates of neuroimaging anomalies, amnesia, or loss of consciousness were not associated with prolonged recovery.
A total of 600 patients were enrolled in the study, among whom 324 (54.0%) were female and 435 (72.5%) were adolescents. Overall demographic characteristics of the 4CYC cohort and breakdown by sex are summarized in . Compared with boys and men, a greater proportion of girls and women were adolescents (187 patients [67.8%] vs 248 patients [76.5%]; P = .02). Most patients were non-Hispanic (475 patients [87.8%]) and White (375 patients [75.6%]). Medicaid or state Child Health Insurance Plan covered only 91 patients (15.3%). Medicaid covered more of the boys and men in our cohort compared with girls and women. Preinjury, most patients were in regular school. Parental education was high, with most parents having a college degree or above, and almost half having a masters or doctoral degree . A total of 372 parents (62.0%) consented to follow-up. Analysis between those with follow-up and those who did not consent for follow-up showed no significant differences (eTable 1 in the ). Of patients who consented for follow-up, 293 (78.8%) actually responded to follow-up surveys. Patients who responded to follow-up were more likely to have comorbidities than those who did not; however, they did not differ otherwise (eTable 2 in the ). Patient preinjury medical history is reported by sex in and by age group in . Significantly more girls and women reported preexisting anxiety than did boys and men (80 patients [26.7%] vs 46 patients [18.7%]; P = .03). No differences were found between sexes in the preinjury diagnoses of attention-deficit/hyperactivity disorder, migraine, depression, or learning disabilities as well as a count of total comorbidities. Adolescents, compared with preadolescents, were more likely to report a diagnosis of migraines (82 patients [20.9%] vs 15 patients [10.9%]; P = .01) and a history of prior concussion (234 patients [54.8%] vs 15 patients [10.9%]; P < .001). Adolescents had more total comorbidities than preadolescents (eg, ≥3 comorbidities: 53 patients [15.1%] vs 15 patients [12.5%]; P = .008).
Patients presented to a clinic a median (interquartile range) of 16.0 (8.0-29.0) days after injury. Injury characteristics are presented by sex in eTable 3 in the and by age group in eTable 4 in the . Girls and women were more likely to present with stable or worsening symptoms over the course of the first week, while boys and men were more likely than girls and women to have improving symptoms within the first week. There were no other significant sex differences in injury mechanism or characteristics. Acute injury severity surrogates, including neuroimaging anomalies, amnesia, and loss of consciousness, are reported in eTable 5 in the . In this cohort, relatively few patients underwent acute or subacute neuroimaging, consistent with existing clinical guidelines for pediatric mild TBI. Most injuries (452 injuries [75.3%]) came from sports or recreation, followed by being struck by or against an object or person and falling (nonsport). When examining the cause of injury by age, adolescents were more likely to suffer a sports- or recreation-related injury than preadolescents. Preadolescents were more likely to sustain a nonsport-related accident or be struck by an object or person than were adolescents. There were no other age group differences in injury characteristics (eTable 4 in the ).
Recovery
Girls and women recovered at a slower rate than boys and men (persistent symptoms after injury: week 4, 217 patients [81.6%] vs 156 patients [71.2%]; week 8, 146 patients [58.9%] vs 89 patients [44.3%]; week 12, 103 patients [42.6%] vs 58 patients [30.2%]; P = .01). There was no significant difference in persistent symptoms in adolescents vs preadolescents. Patients who reported a preinjury history of emotional distress (ie, anxiety or depression) recovered more slowly than those without (persistent symptoms after injury: week 4, 89 patients [80.9%] vs 251 patients [75.6%]; week 8, 59 patients [57.8%] vs 156 patients [50.5%]; week 12, 48 patients [48.0%] vs 99 patients [33.3%]; P = .009) ( A). Patients with a migraine history had more persistent symptoms than those without migraine (persistent symptoms after injury: week 4, 62 patients [87.3%] vs 266 patients [73.9%]; week 8, 42 patients [67.7%] vs 165 patients [49.0%]; week 12, 34 patients [55.7%] vs 108 patients [33.2%]; P = .001) ( B). Neither overall burden of comorbidities nor history of prior concussion showed a significant association with recovery. A post hoc analysis showed that prior emotional distress or migraine was associated with slower recovery irrespective of sex (eTable 6 and eTable 7 in the Supplement). Acute injury severity surrogates of neuroimaging anomalies, amnesia, or loss of consciousness were not associated with prolonged recovery.
Discussion
This prospective multicenter cohort study describes mild TBI recovery outcomes in a large cohort of patients presenting to subspecialty clinics, examining sex and age associations with mild TBI recovery profiles. The 4CYC study further demonstrates the ability to obtain a comprehensive and clinically useful pediatric mild TBI data set in the course of a usual multidisciplinary clinic visit. This 4CYC cohort study examines the recovery characteristics of an important group of patients with mild TBI presenting to subspecialty care. These youths represent a subgroup at greater risk of experiencing prolonged recovery and PPCS than youths presenting in more acute settings: more than 70% of youths in this study had symptoms lasting longer than 1 month, and 40% were still symptomatic at 3 months. A better understanding of this group's characteristics is a major public health priority for providing improved prognostic estimates, more accurate assessment, and timely intervention. Studying children and adolescents from outpatient subspecialty concussion clinics captures a different sample of patients than those from the ED or athletics. A 2013 multisite study of mild TBI based in the ED recruited patients with more severe or highly symptomatic initial injuries, prompting early presentation. Conversely, a 2010 study of youth sport-related mild TBI acquired data through school-based athletic trainers, resulting in patients with injuries who often do not present for care at an outpatient concussion clinic or ED. Studies in ED patients and in youths with sports-related mild TBI have reported a relatively rapid recovery in most individuals, with a much smaller proportion of patients reporting ongoing symptoms at 1 month or 3 months. Another important characteristic of the 4CYC cohort was that a substantial proportion of patients were preadolescent, while many earlier studies, particularly in sports-related TBI, have focused primarily on high school–aged youth. The inclusion of younger children with mild TBI can help determine age-specific differences in symptom presentation and recovery. While demographic studies of pediatric TBI have shown a 2-to-1 predominance of boys, our 4CYC cohort comprised almost equal numbers of boys and girls, with an increasing proportion of girls and women (>57%) in the adolescent age range, a factor that this study suggests is essential for prognostic estimates. Our sample of patients treated in specialty mild TBI clinics had higher socioeconomic status than the general population, with low rates of Medicaid insurance and high rates of parental education, similar to what has been reported in a study by Copley et al. A more socioeconomically balanced sample is necessary in future work to ensure that the recovery characteristics of youth with lesser financial and educational resources are also well defined.

Association of Sex and Age
An age-by-sex difference was evident in our sample, with a larger proportion of girls and women in the adolescent group. Adolescent girls and women have been shown in ED and sports studies to be at higher risk for PPCS. Prolonged recovery was also seen in girls and women in this 4CYC cohort. Many factors have been ascribed to the sex associations of concussion risk and recovery, including neck strength, hormonal differences, comorbidities with a sex predominance, symptom reporting, and social biases.
While girls and women took longer to recover in this concussion clinic cohort, the differences could not be entirely ascribed to the presence of selected comorbidities, suggesting other underlying biological or social determinants. The onset of migraine, anxiety, and depression is known to occur typically in adolescence; however, in the 4CYC population, only migraine and history of prior concussion showed significant age differences. Prior studies have reported mixed results on whether the immature brain is more susceptible or more resilient to TBI and concussion, with a 2018 study showing younger children to be more susceptible and other studies finding adolescence to be a period of greater risk for developing prolonged problems. While there was a significant association of sex with recovery time, there was no significant difference in recovery time in adolescents compared with preadolescents.

Comorbidities and Recovery
The interaction between sex and select comorbidities has often been implicated in mild TBI recovery, including mental health problems such as preexisting attention-deficit/hyperactivity disorder, learning disability, anxiety, depression, sleep problems, or migraines, or a prior history of concussion. Although the 4CYC study did not find an association with all of these factors, girls and women were more likely than boys and men to have a history of anxiety. The Centers for Disease Control and Prevention reports rates of clinician-diagnosed anxiety and depression in children aged 3 to 17 years without mild TBI at 7% for anxiety and 3% for depression. However, other epidemiological studies show much higher rates of anxiety (30%) and mood disorders (11%) in adolescents aged 13 to 17 years, with a higher prevalence in girls. The preinjury rates reported in this study were comparable with age-specific rates reported for these common comorbidities. The rate of migraine in our cohort may be higher than in the general population. In a post hoc analysis, our cohort demonstrated higher rates of migraine in adolescents and a significant interaction between sex and age, with adolescent girls and women having the highest rate. The comorbidities of emotional distress (defined here as depression or anxiety) and migraine were both associated with longer recoveries, which mirrors findings in other cohorts. We found that girls and women were more likely to report unchanged or worsening symptoms over the first week compared with boys and men, who were more likely to report improving symptoms over this time window. While girls and women were at greater risk for prolonged recovery, the associations of comorbid emotional distress or migraine with recovery were independent of sex. This suggests that comorbidities do not entirely account for the sex differences in symptoms and recovery seen at longer time windows after concussion, and that some diagnoses, like migraine, anxiety, and depression, may have underlying biological characteristics that prolong symptom recovery in both sexes. Because these conditions are treatable, early identification may provide a means to accelerate recovery. This may have important implications for initial assessment and potential interventions to prevent or treat PPCS.

Limitations
This study has some limitations. While data were collected from 3 different institutions and health care settings, our cohort contained a high proportion of White, well-insured patients with highly educated parents.
This suggests that our findings may be less generalizable to the general population, and greater outreach is needed, as all 3 institutions treat patients regardless of insurance status. Nonetheless, this is a large prospective study of concussion and recovery in a subspecialty clinic population. The 4CYC is a unique consortium of multidisciplinary centers, which differs from many earlier studies using ED, primary care, or sports concussion cohorts; this limits the acute injury severity details available, although acute injury severity has been shown to be a weak estimator of prolonged recovery. Visits outside of the 4CYC specialty clinics were not captured, limiting generalization. Collecting recovery data for concussion is a challenge: a large proportion of patients with mild TBI recover over time and may not return for follow-up. Our study addressed this challenge by disseminating surveys to patients' parents every 3 months after the injury date to capture a recovery date without the need for a follow-up visit, and we had good response rates for the follow-up survey. The only difference between the group who responded to follow-up and those who did not was that responders had fewer comorbidities. Most recovery times were determined by the clinician at a follow-up visit. While different factors might influence parent report of recovery, these data were collected prospectively with a uniform definition of complete recovery to minimize potential bias. Additionally, the comorbidity data in this study were reported by the parent and patient in medical interviews by a licensed clinician as part of normal clinical care but were not necessarily independently diagnosed by the clinician.
Conclusions
The 4CYC is a multicenter group organized to prospectively study the subspecialty clinic presentation and recovery of pediatric patients with mild TBI. A substantial proportion of patients in this cohort study experienced prolonged recovery. Sex differences in recovery time were observed, with girls and women taking longer to recover than boys and men. Patients reporting comorbidities of emotional distress (ie, anxiety or depression) and migraine recovered more slowly, independent of sex. Understanding the factors associated with prolonged recovery and PPCS in pediatric patients with mild TBI is essential for accurate prognostic estimates and for identifying phenotypes for which specific therapeutic interventions can be applied more effectively.
Artificial intelligence for advance requesting of immunohistochemistry in diagnostically uncertain prostate biopsies

Introduction
Prostate cancer (PCa) is the most common malignancy in men worldwide, and biopsies with suspected prostate adenocarcinoma contribute a significant proportion of the workload for surgical pathology centres. In many parts of the world, demands on pathology services are increasing and staff numbers are falling. In the United Kingdom (UK), for example, a 2018 survey by the Royal College of Pathologists found that only 3% of surgical pathology departments have sufficient senior medical staffing, and around a quarter of the workforce are moving towards retirement. The National Health Service (NHS) spends an estimated £27 million ($34 million) on locum and private services to cover this shortfall in service provision. Over 60,000 prostate biopsies are carried out in the UK per year and over one million in the United States of America. With some prostate biopsy cases being allocated over an hour for reporting under proposed workload guidelines, this represents a significant workload burden. The potential benefits of digital pathology (DP) and artificial intelligence (AI) have been well described, and it is clear that there could be much to gain from the introduction of workflow-based AI tools that support prostate biopsy reporting without affecting established decision-making. While a number of tools exist for automated prostate biopsy screening, detection, and grading of tumours, some with regulatory clearance for diagnostic use, uptake of such tools remains relatively limited thus far. A DP workflow is needed to enable AI, and with increasing numbers of deployments in cellular pathology laboratories worldwide, the pace of uptake of AI should increase accordingly, although challenges remain in development and deployment. In a digital workflow, AI can be used to assist pathologists as they screen prostate biopsy slides, ultimately looking to confirm or exclude malignancy. An important adjunct to diagnosing PCa is the request of immunohistochemistry (IHC) for evaluating suspicious foci. An unmet need is the ability to triage slides, without waiting for a pathologist to review the case, to identify which cases cannot be signed out on review of Hematoxylin & Eosin (H&E) alone and need IHC. If such automated requesting could be achieved, the workflow could be significantly streamlined. To diagnose PCa, pathologists search for a number of characteristic visual cues until enough features are found to confidently diagnose malignancy. These features can be architectural and cytological. In acinar adenocarcinoma, for instance, glands are infiltrative, often small in size compared to benign ones, and crowded together. Cytologically, there are usually larger nuclei with one or more prominent nucleoli, often presenting perinucleolar clearing. In a proportion of cases, the prostate epithelium presents some of the features described above, but not to an extent that can lead to a convincing cancer diagnosis by morphology alone; alternatively, the features are morphologically convincing, but the lesion is so small that IHC is required for confirmation. As a number of benign mimics of PCa, and conversely deceptively bland variants of PCa, exist, IHC is often required.
Some examples of such unclear morphology include prostatic intraepithelial neoplasia (PIN) with smaller glands that could represent early invasion (sometimes known as 'PIN Atyp'), areas of atrophy that are probably benign, or areas that are suspicious for cancer but too small for definitive diagnosis, i.e. atypical small acinar proliferation (ASAP). The proportion of cases requiring IHC varies across institutions, ranging from 25 to 50% of total cases in some reports. Clinical guidelines recommend the use of basal cell marker IHC to detect the loss of basal cells in the epithelial tissue. The absence of basal cells is the hallmark of malignant prostatic glands, and IHC is highly effective at reducing diagnostic uncertainty. The main IHC markers recommended by the International Society of Urological Pathology for routine diagnostic practice include CK5/6, 34BE12, and P63, or a combination of basal markers and AMACR in a "cocktail" stain. Examples of prostate biopsies stained with H&E and CK5 are shown in Fig. . Not every gland that lacks basal cells is malignant: in adenosis, for example, as few as 10% of the glands can show basal cells. Thus, the decision to request IHC is made at the level of a focus or area, and may be based on one or a number of foci of interest where the morphology is ill-defined. The request of IHC introduces necessary delays to a case. Figure illustrates the routine workflow for prostate biopsies common to most centres. The pathologist must first find time to review the morphology to make the decision, and then put the case on hold while the IHC is performed. The time for IHC to be performed varies across laboratories but is usually 1 to 3 days. When the pathologist reviews the case in further reporting sessions, there is inherent inefficiency and time wasted in refamiliarising oneself with the case. Our own experience confirms that IHC requesting increases turnaround time and reporting time for PCa, and that half of the delay due to IHC requests is incurred between the time the case is accepted by the pathologist and the time IHC is requested. Here we design and validate an AI tool for automating the decision to request IHC for prostate biopsy cases. The study setting is one of the sites in the PathLAKE consortium, one of the UK Government's AI Centres of Excellence. Several studies have demonstrated that novel computer vision algorithms based on high-dimensional function optimisation, known as deep learning (DL), can extract visual features that encode clinically relevant morphological patterns from images, such as the Gleason score. We build on these developments and demonstrate the training and validation of a DL system to identify regions of prostate biopsy whole slide images (WSIs) associated with diagnostic uncertainty. We then use the visual features extracted by the algorithm to train a gradient-boosted trees classifier to predict whether IHC is required to diagnose a PCa case. In this study, we demonstrate an AI tool which can trigger IHC requests from the H&E slides without the need for a pathologist to review the case first to make that decision. The pathologist then only needs to view the case once, with all of the available stains, and the necessary delays to IHC requests are reduced. We describe the potential to expedite the prostate biopsy workflow, reducing inefficiency and ultimately reducing the time to sign-out, enabling results to reach patients and treating clinicians in a quicker time frame, as we propose in Fig. .
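Because the tool is intended to run before any pathologist review, the triage step can be framed as a simple pre-reporting hook. The sketch below illustrates the idea only; every name (score_slide, order_ihc_fn, the 0.5 threshold) is a hypothetical placeholder rather than part of the deployed system.

```python
# A minimal sketch of the proposed triage step (all names hypothetical).
# Each newly scanned H&E slide is scored; if any slide in the case is
# flagged, basal cell marker IHC is requested before the pathologist
# first opens the case.

from dataclasses import dataclass
from typing import Callable, List, Sequence

@dataclass
class TriageDecision:
    slide_id: str
    probability: float  # model's estimated probability that IHC is needed
    order_ihc: bool

def triage_case(slide_ids: Sequence[str],
                score_slide: Callable[[str], float],
                order_ihc_fn: Callable[[Sequence[str]], None],
                threshold: float = 0.5) -> List[TriageDecision]:
    """Score every H&E slide in a case; order IHC if any slide triggers."""
    decisions = []
    for sid in slide_ids:
        p = score_slide(sid)
        decisions.append(TriageDecision(sid, p, p >= threshold))
    if any(d.order_ihc for d in decisions):
        order_ihc_fn(slide_ids)  # e.g. place a LIS order for basal cell markers
    return decisions

# Example with stand-in scores and a print in place of a real LIS order:
scores = {"case1_slide1": 0.12, "case1_slide2": 0.81}
print(triage_case(list(scores), scores.get,
                  lambda ids: print("IHC requested for", list(ids))))
```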
Methods

Setting
The study was undertaken in a large academic teaching hospital in the UK (tertiary referral centre) with specialist urological pathology reporting, which processes 750–1100 prostate biopsies per year. The cellular pathology laboratory achieved the milestone of scanning 100% of the surgical histology workload in September 2020, with pathologists validated for full DP reporting. Three specialist uropathologists (authors CV, LB, RTC) were involved in the development of the tool, two with greater than 10 years post-specialist registration experience and one with 2 years post-specialist registration experience.

Retrospective audits
In order to understand baseline rates of IHC request and the potential workflow implications of the tool, all prostate biopsy cases were audited over a 12-month period, from August 2018 (before the introduction of DP) to August 2019. The audit collected data on the case types (transrectal ultrasound-guided biopsies or systematic transperineal biopsies), number of biopsies, turnaround times, extra work ordering, IHC requests and final diagnosis. To capture actual pathologist reporting times with and without IHC, and the necessary delays due to IHC requests, a prospective audit of consecutive prostate biopsy cases reported by three specialist urological pathologists (CV, LB, RTC) during the period September 2019 to March 2020 was undertaken. For all cases, the date the case was received, the date the case was reviewed as H&E alone (reporting session 1), the date IHC was requested, the dates of further reporting sessions where the case was reviewed with IHC (reporting session 2) and the date the case was signed out were recorded. Using stopwatches, the following times were recorded: (1) time for the initial slide review (H&E only) and making notes, (2) time to organise and make the IHC request, (3) if IHC was requested, time to review the case again with IHC, and (4) time to type the report.

Modelling of potential time savings
In order to model potential time savings with upfront IHC ordering, we compared the mean turnaround times (date received to sign out) and pathologist reporting times (time for pathologists to examine and write the report) for both IHC-requested and non IHC-requested cases. We assumed IHC would be performed shortly after H&Es were ready (e.g. the process started within 3 h) and would not lead to any significant delays. Reporting time for IHC-requested cases was divided into two distinct sessions: during the first session, the pathologist examines the H&E slides and decides whether the case requires IHC. In the second session, the pathologist examines H&E and IHC slides together to make a diagnosis. Estimating the time saving achievable by having IHC available at the same time as the H&E for reporting, thus removing duplication of effort, is complex and involves several factors in the decision-making process. We calculated this for our laboratory in two ways: firstly, we assumed that the reporting time for cases with advance IHC requesting would be shortened to the reporting time for non IHC-requested cases. The second estimation method consisted of modelling the different tasks and factors impinging on reporting time individually. We compared the two estimates to approximate the benefits in reporting time.

Prospective cohort curation
The study was conducted under the Pathology image data Lake for Analytics, Knowledge and Education (PathLAKE) research ethics committee approval (reference 19/SC/0363) and Oxford Radcliffe Biobank research ethics committee approval (reference 19/SC/0173).
We created a WSI prostate biopsy training and testing set for the proposed IHC requesting tool from routine diagnostic cases. Prostate biopsies in which IHC for basal cell markers was requested during the period September 2019 to March 2020 were identified prospectively for study inclusion. Biopsies where IHC was not ordered before sign-out were excluded, as were biopsies that could not be or had not been digitised. All biopsies were reported by one of three specialist uropathologists at our tertiary centre, using a mix of primary DP reporting (Philips IntelliSite Pathologist Suite, Koninklijke Philips N.V, Amsterdam, Netherlands) and traditional light microscopy/glass slide reporting. All cases were digitally scanned on a Philips IntelliSite Ultra Fast Scanner (version 1.8.6614,20180906_R51, Koninklijke Philips N.V, Amsterdam, Netherlands).

Classifying ambiguous foci
In order to understand the reasoning behind pathologists' IHC requests and to identify categories to be modelled by machine learning, we devised a classification system of eight types of ambiguous prostate gland foci that would prompt IHC requests. These 'reasons' were based on the pathologists' experience and were devised to cover a representative range of the most common reasons for ordering IHC to confirm or exclude PCa. These are summarised in Table with examples in Fig. .

Training data
All cases which had IHC requested by the three reporting pathologists (CV, LB, RTC) as part of the diagnostic process during the period of the prospective study were included in training. WSIs were de-identified using the Philips De-ID tool [version 1.1.5, Philips Digital Pathology Solutions Document DP-174226] and imported for annotation onto our in-house annotation platform, Annotation of Image Data by Assignments (AIDA) . Each case was annotated by one of the pathologists only. All foci that prompted IHC ordering were included in the training dataset. In total, 299 WSIs from 241 patients were used to train the algorithm. Of these, 219 WSIs (187 patients) were non-selected consecutive cases that had prompted IHC ordering; the remaining 80 WSIs (54 patients) were selected from the previous (2019) clinical workload and designated as control cases. Pathologists annotated WSIs on the AIDA system, drawing around the focus (or foci) of interest that had prompted IHC ordering (using a free-hand digital drawing tool) and selecting the reason for ordering IHC (up to 8 foci per case); one possible representation of these annotations is sketched below. Figure summarises the collection and usage of the WSI datasets. The control cases had been reported as benign or malignant (50:50 split), with no IHC ordered at the time; they were included in the training/testing dataset to provide negative examples of benign and malignant cases for the algorithm.
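As a concrete illustration, the annotations described above could be held in memory as follows. The field names and the example reason label are illustrative, not the study's actual schema; the eight categories themselves are those summarised in the table referenced above.

```python
# One possible in-memory representation of the training annotations
# (illustrative field names, not the study's actual schema).

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class AnnotatedFocus:
    polygon: List[Tuple[float, float]]  # free-hand outline drawn in AIDA
    reason: str                         # one of the eight IHC-request categories

@dataclass
class TrainingSlide:
    slide_id: str                       # de-identified WSI
    ihc_requested: bool                 # False for the benign/malignant controls
    foci: List[AnnotatedFocus] = field(default_factory=list)  # up to 8 per case

# Tiles falling inside `polygon` regions of IHC-requested slides are labelled
# "ambiguous"; tiles from control-slide regions are labelled "certain".
slide = TrainingSlide("WSI_0001", True,
                      [AnnotatedFocus([(0.0, 0.0), (120.0, 0.0), (120.0, 90.0)],
                                      "small focus of cancer needing confirmation")])
print(slide)
```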
Algorithm development
We sought to develop an algorithm that could recognise tissue that is deemed ambiguous by pathologists, such that the case cannot be signed out on H&E morphology alone. We divided the histology data into a concise categorisation reflecting the decision process carried out in the clinic. In day-to-day practice, an IHC order is triggered by the presence of tissue with ambiguous morphology. Malignant tissue with very poor differentiation will take the organisation of higher Gleason patterns (e.g. amorphous sheets, cribriform glands), while benign tissue may mimic low Gleason patterns. As clearly benign and clearly malignant tissues are easy for the pathologist to distinguish, we grouped these tissues together as "certain" tissue; intermediate differentiation levels were instead labelled "ambiguous". We trained a binary deep neural network (DNN) classifier to distinguish ambiguous from certain tissue. This corresponds to recognising all cases that cause sufficient uncertainty in the diagnostic procedure to require further information on the tissue, in the form of an IHC stain. The idea is illustrated in Fig. . We performed threefold cross-validation in order to test the algorithm. We created three training splits of 200 slides by randomly sampling with replacement from the 299 slides of our training dataset; the remaining 99 slides outside of each split were assigned as the test set. The ratio of control to ordered slides was fixed at 0.4 in each split. Digital histology images can be corrupted by different types of artefacts, including blur, debris and tissue folds, but also intensity irregularities in the background due to imprecision in the scanning process . We therefore trained a separate DNN to segment tissue areas robustly. 1024 × 1024 tissue tiles at a resolution of 1.0 µm/px (10×) were extracted from annotated foci of interest within the tissue boundaries in the IHC-requested slides, and from benign and malignant regions in the control slides. An ensemble of three Residual Attention DNNs was trained on each data split. The network ensemble was used to estimate the uncertainty of prediction on each tile, following the method described in . The networks were trained for 200 epochs to convergence. Early stopping was not used; instead, we relied on online space-domain and frequency-domain alterations of the training tiles, such as affine transforms and Gaussian noise, to augment the dataset and avoid overfitting. In order to evaluate the model, inference was performed on the individual tiles of each focus, then the softmax class probabilities were averaged over all the tiles comprising the focus. The final label was assigned according to the class with the highest probability, as sketched below. The training procedure for the tile classifier is shown in Fig. .
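The focus-level evaluation can be made concrete with a short sketch: softmax outputs are averaged over the tiles of a focus and the label is taken as the argmax, while the variance across the three ensemble members provides a simple uncertainty signal. This is a minimal illustration assuming a two-class output with the class order ("certain", "ambiguous"); it is not the study's code.

```python
# Sketch of the focus-level evaluation: average softmax over the tiles of a
# focus, take the argmax, and use ensemble disagreement as uncertainty.
# `ensemble_probs` stands in for the outputs of the three trained Residual
# Attention networks; the class order is assumed.

import numpy as np

def classify_focus(ensemble_probs: np.ndarray):
    """ensemble_probs[m, t, c]: softmax of ensemble member m on tile t."""
    tile_probs = ensemble_probs.mean(axis=0)   # average over ensemble members
    focus_probs = tile_probs.mean(axis=0)      # average over the focus's tiles
    label = ("certain", "ambiguous")[int(focus_probs.argmax())]
    uncertainty = float(ensemble_probs.var(axis=0).mean())  # member disagreement
    return label, focus_probs, uncertainty

# Example: 3 ensemble members, 5 tiles, 2 classes.
rng = np.random.default_rng(0)
logits = rng.normal(size=(3, 5, 2))
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
print(classify_focus(probs))
```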
Because there is no clear-cut criterion to determine whether the tissue morphology is atypical enough for the H&E stain to be diagnostically insufficient alone, the assessment contains a degree of uncertainty. This subjective component in the IHC order decision can lead to differences in annotations between pathologists, and the conflicting tile labels can result in the model overfitting to tissue features on which pathologists disagree. Beyond the uncertainty in tile labels, a pathologist can decide to order IHC for a patient because of multiple interesting tissue features present in different locations of the slide. Hence, an approach that considers each tile in the WSI separately is not sufficient to make an accurate IHC order decision for the patient. A second step of the algorithm was therefore designed to decide whether to order IHC, taking into account the tile-level features and prediction uncertainties aggregated over the whole slide. First, the contents of all tiles in the slide were compressed into a feature vector. The DNN was applied to all tissue tiles in every H&E image; Figure shows examples of tile classification. Tile feature vectors were calculated from individual tiles through a similar approach to . Distribution statistics (median, mean, variance, and kurtosis) were computed for the features of tiles with an "ambiguous" label and for tiles with a "certain" label. Furthermore, the prediction probability variance and loss variance for the slide were computed. The decision model consisted of a gradient-boosted trees classifier, which was trained on the slide feature vectors to predict whether IHC should be ordered for the patient case. Feature vectors were computed for each slide following the process detailed above, and the decision model was applied to all slides. The procedure is shown in Fig. .
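A minimal sketch of this slide-level aggregation and decision step is given below. For brevity, the statistics are computed here from tile probabilities only, whereas the study also aggregates the DNN feature vectors and the loss variance; scikit-learn's GradientBoostingClassifier is a stand-in for the boosted-trees model actually used.

```python
# Sketch of the slide-level decision step: per-tile outputs are summarised
# into one feature vector per slide, then a gradient-boosted trees model
# predicts whether IHC should be ordered. Simplified relative to the study.

import numpy as np
from scipy.stats import kurtosis
from sklearn.ensemble import GradientBoostingClassifier

def _stats(group: np.ndarray) -> list:
    if group.size < 2:  # guard against empty/degenerate tile groups
        return [0.0, 0.0, 0.0, 0.0]
    return [float(np.median(group)), float(group.mean()),
            float(group.var()), float(kurtosis(group))]

def slide_features(tile_probs: np.ndarray) -> np.ndarray:
    """tile_probs: P(ambiguous) for every tissue tile of one slide."""
    ambiguous = tile_probs[tile_probs >= 0.5]
    certain = tile_probs[tile_probs < 0.5]
    features = _stats(ambiguous) + _stats(certain)
    features.append(float(tile_probs.var()))  # slide-level prediction variance
    return np.array(features)

# Toy training example: 40 slides with random tile counts.
rng = np.random.default_rng(1)
X = np.stack([slide_features(rng.beta(2, 5, size=int(rng.integers(50, 300))))
              for _ in range(40)])
y = np.array([0, 1] * 20)  # 1 = IHC was requested
clf = GradientBoostingClassifier().fit(X, y)
print(clf.predict_proba(X[:3]))
```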
Clinical validation
A retrospective clinical validation of the algorithm on a fresh set of images was performed. For the validation, 100 new prostate biopsy cases were selected consecutively from the 2019 scanned slide archive. Cases that were not scanned were later excluded, bringing the dataset to 91 cases, with a total of 222 slides. Cases were selected from early 2019, prior to the collection of annotated cases for training, in order to avoid dilution of the case mix by the removal of cases for training. In order to maximise the range of potential morphological appearances while limiting the number of images requiring review, one specimen/site was selected from each case (specimen 1 of the case) and the H&E slides from one level of the tissue were used. Thus, one image was taken from each case and presented blindly to pathologists on AIDA. All three pathologists reviewed all images, thus generating three separate sets of validation annotations. Pathologists annotated each image to note whether, in their opinion, it would prompt IHC ordering, in the same manner as was performed for the training annotation set. The algorithm was then applied to all slides. The ResNet ensemble was used to classify all tissue tiles of the WSI. Features were computed as explained in the previous section and used to predict which cases need IHC requesting in the validation set. The algorithm decisions were then compared to the pathologists' decisions.
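The comparison between the model's decisions and each pathologist's annotations reduces to standard agreement metrics, as in the following sketch (toy data shown; the real evaluation uses the 91-case validation set).

```python
# Sketch of the validation comparison: slide scores from the model are
# compared against each pathologist's independent IHC-request decisions.

import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

def agreement_with_pathologists(model_scores, pathologist_labels, threshold=0.5):
    """pathologist_labels: dict mapping pathologist id -> list of 0/1 decisions."""
    model_calls = (np.asarray(model_scores) >= threshold).astype(int)
    report = {}
    for pathologist, labels in pathologist_labels.items():
        report[pathologist] = {
            "agreement": accuracy_score(labels, model_calls),
            "auc": roc_auc_score(labels, model_scores),
        }
    return report

# Toy example with three annotators on 10 validation slides.
scores = [0.9, 0.2, 0.7, 0.4, 0.8, 0.1, 0.6, 0.3, 0.95, 0.05]
labels = {"pathologist_1": [1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
          "pathologist_2": [1, 0, 1, 1, 1, 0, 0, 0, 1, 0],
          "pathologist_3": [1, 0, 0, 0, 1, 0, 1, 1, 1, 0]}
print(agreement_with_pathologists(scores, labels))
```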
Results

Turnaround and reporting times audits
The results of the retrospective audit enabled a comparison of the necessary time costs incurred when IHC is requested for PCa cases. The mean turnaround time for IHC-requested cases was 7 days and 10 h (95% CI: (7 days 2 h, 7 days 16 h), n = 380), while the mean turnaround time for non IHC-requested cases was 4 days and 5 h (95% CI: (4 days, 4 days 9 h), n = 576). This included all types of prostate biopsy, and the case mix was similar across both cohorts, as both IHC-requested and non IHC-requested cases had a median of three blocks. This indicates that the potential time saving from the introduction of this tool in our laboratory was 3 days and 5 h on average (95% CI: (2 days 20 h, 3 days 12 h), n1 = 380, n2 = 576, Welch-Satterthwaite (WS) approximation ).
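The interval above follows from the Welch–Satterthwaite approximation for the difference between two means with unequal variances. The sketch below reconstructs the calculation; the per-group standard deviations are made-up placeholders, since the audit reports only means and confidence intervals.

```python
# Sketch of the Welch-Satterthwaite interval for a difference in means
# between two unequal-variance groups (illustrative standard deviations).

import numpy as np
from scipy import stats

def welch_ci(m1, s1, n1, m2, s2, n2, alpha=0.05):
    se1_sq, se2_sq = s1**2 / n1, s2**2 / n2
    se_diff = np.sqrt(se1_sq + se2_sq)
    # Welch-Satterthwaite effective degrees of freedom.
    dof = (se1_sq + se2_sq)**2 / (se1_sq**2 / (n1 - 1) + se2_sq**2 / (n2 - 1))
    t = stats.t.ppf(1 - alpha / 2, dof)
    diff = m1 - m2
    return diff, (diff - t * se_diff, diff + t * se_diff)

# Turnaround time in hours: IHC-requested (7 d 10 h = 178 h, n = 380) vs
# non IHC-requested (4 d 5 h = 101 h, n = 576); difference 77 h = 3 d 5 h.
print(welch_ci(m1=178, s1=70, n1=380, m2=101, s2=50, n2=576))
```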
Non IHC-requested cases took an average of 17.9 min to be diagnosed (95% CI: (16.7, 19 min), n = 128), while the reporting time for cases where IHC was requested averaged 33.4 min (95% CI: (30.7, 36.2 min), n = 133) over the course of two or more reporting sessions, an average difference of 15.5 min. The time savings that could be achieved by having IHC available at the same time as the H&E, thus removing duplication of effort, are influenced by several factors inherent to the slide review and diagnostic decision processes. We calculated this for our laboratory in two ways. Firstly, assuming automatic IHC ordering would reduce the reporting time for IHC-requiring cases to the time taken to diagnose cases with no IHC request, we estimate a time saving of at least 12.6 min per case, taking the lower end of the confidence interval for the difference between mean IHC-requested and non IHC-requested reporting times, and 15.6 min on average (95% CI: (12.6, 18.5 min), n1 = 133, n2 = 128, WS approximation). Secondly, we assume a workflow where H&E review occurs during reporting session 1, and reporting session 2 consists of review of the IHC together with re-review of highlights of the H&E. In a slide viewing session, the pathologist screens the slides/images, spends time viewing difficult areas in more detail and makes a decision either to order IHC or to make a firm diagnosis. In session 1, more time is spent on difficult areas and a decision is made to order IHC. In reporting session 2, the pathologist re-reviews the H&E, focusing on the areas of difficulty, reviews the new slides (IHC) and makes a decision. Thus, in an IHC-requested case, the duplicated effort comprises re-reviewing the H&E to refamiliarise with the case in session 2, plus making one more set of decisions than if the case were reported once with all of the necessary slides (a decision is made in both sessions 1 and 2), and making this decision takes time. We assume that this additional step of decision-making in an IHC case takes 1.5 min. During re-review of the H&E, the previously marked foci of ambiguous tissue are examined. Similarly, during the first review of the IHC, the corresponding foci are examined to confirm staining status. We therefore assumed that the times taken to re-review the H&E and to review the IHC are approximately equal: 7.5 min are spent re-reviewing the H&E and 7.5 min are spent reviewing the IHC. From our audit, IHC requesting took an average of 1 min. We therefore estimate that 11 min can be saved by having IHC available at the same time as the H&E, with the case viewed only once. This is likely to be a conservative estimate and does not take into account additional time spent picking up additional sets of slides from the lab, marking up additional slides, etc.

Annotation data
169 prospective cases were included in the study, from which 641 foci that prompted IHC ordering were identified across the three levels of each core. These foci were annotated for training the algorithm. Of these, Pathologist A annotated 32 foci, Pathologist B 284, and Pathologist C 325, in proportion to their clinical workloads. The breakdown of the reasons for ordering IHC and the final diagnoses are given in Table , with examples in Fig. . The commonest reasons for IHC request were small foci of cancer needing confirmation (187 foci) and atypical foci that were probably benign (144 foci).

Algorithm performance
The reliability of tile-level foci classification is reported in Table . The ensemble attains excellent classification performance on the unseen test sets. Figure compares example output of the tile classifier on unseen validation data with pathologists' annotations. The model outputs a "certain" label for 90% of the test tiles; hence, the models learned that only small regions of the needle biopsy contain ambiguous tissue morphology. This reflects standard diagnostic practice, where the need to order IHC is decided from small portions of the needle biopsy. While pathologists only identify foci of ambiguity on slides where an IHC stain will be requested, the algorithm finds at least some ambiguous tiles in almost every slide of the dataset, with only 4 of the 99 test-set slides containing no ambiguous tiles. This large number of ambiguous foci is likely due to morphological characteristics of tissue that, given the potential range of appearances, were not represented in the training data: tissue never seen by the model will produce spurious classifications for at least some tiles in most images. Most images are large, with a mean of 209 tiles per image in the test dataset, which increases the chance of encountering such unfamiliar tissue appearances. These results highlight the need for a decision-making step that is robust to the presence of tissue regions labelled "ambiguous" in the image; the second step of the algorithm was designed to make a slide-level decision based on morphology and solve this issue. The IHC-ordering decision step performs well on the test set, as detailed in Table (mean accuracy: 99%, mean AUC: 0.99). The decision algorithm also predicts pathologists' IHC order requirements on the validation dataset, predicting the need for IHC staining very well according to all pathologists. Table reports the agreement metrics for the IHC order decision between the model and each of the three pathologists. In Fig. , the receiver operating characteristic (ROC) curves for the model predictions vs pathologists' annotations are reported.
As a result of the disagreements in IHC ordering between pathologists, the model matches most closely the IHC ordering of pathologist 1 and performs poorest against pathologist 3's opinion. The average agreement with pathologists is 0.81, with an average AUC of 0.80.

Algorithm analysis
In order to understand what morphological details the model was sensitive to, we derived salience maps for foci from ordered slides and control slides with guided backpropagation (see Fig. ). The network examined the lumen structures inside prostate glands, the nuclei the gland is composed of, and the size of the epithelial cell bodies. Overall, these results highlight that the DNN is capable of recognising the salient features of epithelial structures. This is reflected in the feature vectors constructed from the DNN features and outputs of each slide, which were used to train the slide-level IHC request classifier; their projection onto the principal components is shown in Fig. . The vectors from the three datasets belong to the same point cloud in principal component space (Fig. ). Furthermore, the separation in feature space between IHC-requested and non IHC-requested slides, albeit imperfect, suggests the vectors are representative of ambiguous/certain morphological features.

Workload and time savings impact
We examined three potential operating points on the ROC curve and the trade-off between time savings and additional IHC requests that would be incurred if the tool were operated at those points. The points are marked on Fig. and correspond to a specificity of 0.6 (point 1), 0.75 (point 2), and 0.9 (point 3). Table reports the time savings and the additional incurred costs. Out of 974 retrospectively audited cases, IHC was requested for 380 cases, or 40% of total cases. We used the minimum predicted turnaround time saving of 3 days and 2 h and the minimum predicted reporting time saving of 11 min for the calculations, as discussed earlier. A higher specificity of the chosen operating point corresponded to larger savings in turnaround time and reporting time on one hand, and to larger extra costs due to overcalling on the other. Operating at a lower specificity yielded smaller predicted costs, but the predicted time savings were also reduced. The operating point with the highest specificity (0.9) provided a similar time saving to ordering IHC on all cases, but at half the cost of such reflex testing. Across 1000 sets of prostate biopsies needing IHC, we conservatively estimate the tool would save 165 pathologist hours.
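The arithmetic behind these operating-point trade-offs can be reproduced with a short calculation. In the sketch below, the sensitivity values are illustrative read-offs from the ROC curve rather than figures reported in the text; with a sensitivity of 0.9, 1000 IHC-needing cases, and the 11 min reporting saving, the calculation reproduces the 165 pathologist-hour estimate quoted above.

```python
# Sketch of the operating-point trade-off: expected time saved on true
# positives vs extra IHC incurred on false positives. Sensitivity values
# are illustrative placeholders, not figures reported in the text.

def operating_point_impact(n_cases, ihc_rate, sensitivity, specificity,
                           reporting_saving_min=11.0):
    n_ihc = n_cases * ihc_rate                 # cases truly needing IHC
    n_no_ihc = n_cases - n_ihc
    true_pos = sensitivity * n_ihc             # IHC correctly pre-ordered
    false_pos = (1 - specificity) * n_no_ihc   # unnecessary IHC orders
    return {"pathologist_hours_saved": round(true_pos * reporting_saving_min / 60, 1),
            "extra_ihc_requests": round(false_pos, 1)}

# Per 1000 mixed cases at the audited 40% IHC rate, specificity 0.9:
print(operating_point_impact(1000, 0.4, sensitivity=0.9, specificity=0.9))
# Per 1000 cases that all need IHC, sensitivity 0.9 reproduces the
# 165 pathologist-hour figure (0.9 * 1000 * 11 / 60 = 165):
print(operating_point_impact(1000, 1.0, sensitivity=0.9, specificity=0.9))
```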
In this study, we evaluate the potential implications of automating, by AI, the pre-requesting of IHC in prostate biopsy cases that contain ill-defined epithelial morphology. Benign tissue and malignancies of the prostate present a large variety of morphological patterns, which pathologists must recognise by matching the features to recognised visual categories and identifying distinctive characteristics of malignancy. Regions of tissue containing ambiguous morphology constitute a challenge to uropathologists, as a diagnosis cannot be made without IHC, and even with IHC the tissue may remain in an ambiguous category such as ASAP. We developed a novel pipeline comprising a DL system to detect ambiguous versus certain morphology in needle core biopsies and a boosted random forest to predict which cases require IHC using the tissue content representations encoded by the DL system. The DL system was trained on pathologists' annotations of ambiguous tissue regions, and it successfully recognised image foci contributing to diagnostic uncertainty on H&E WSIs. The gradient boosted trees classifier was used to mimic the decision-making process of pathologists and predict which slides require IHC. This second step was needed because most cases have at least some ambiguous areas flagged, reflecting the variety of morphological appearances that can be seen in the prostate. The slide-based classifier is needed to translate information about ambiguous tissue content in the whole slide into a decision about which cases are sufficiently ambiguous to require IHC. The IHC-request decision step correctly predicted the IHC request decisions on the test set obtained through threefold cross-validation. The algorithm was also validated on the independent validation dataset, where it satisfactorily matched the IHC request decisions of three different pathologists. The good classification results obtained on the validation dataset point to good generalisation properties of the network. The agreement rate between pathologists in the validation set was between 61 and 64%. This is consistent with reports of interobserver variability in other diagnostic tasks with a subjective component, such as Gleason reporting (consistently reported to be around 60%).
In order to simulate over- and under-ordering scenarios in a potential real-life deployment of the tool, we set three arbitrary points on the ROC at sensitivity levels of 60%, 75%, and 90%, resulting in false positive rates (over-ordering of IHC) on 15%, 33%, and 48% of slides respectively. Taking the most conservative level of sensitivity, 40% of cases that needed IHC might be missed by the tool and it might over-order on 15% of slides. We deliberately chose to target diagnostically ambiguous cases rather than basing the tool on a tumour detection tool, to better reflect the inherent complexity of the decision to request IHC. Although a tool based on tumour detection could be set to identify small foci of morphologically convincing tumour or longer lengths of unusual but certain cancers (reasons 1 and 4) and request IHC for confirmation, these scenarios accounted for 36% of cases, far fewer than the ambiguous areas where the diagnosis was not certain, which accounted for 59% of cases (reasons 2, 3, 5, 6, 8). Some of these cases may fall through the net in a tumour-finding approach, being classified as benign with no IHC required. Although most areas had a gland-based morphology, reminiscent of Gleason patterns 3 and 4, rather than diffuse sheets of cells which may represent pattern 5, 5% of the annotated foci fell into this latter category. Regardless, the tool performed well when applied to either pattern. The training also lacked other non-adenocarcinoma diagnoses of the prostate, such as urothelial carcinoma, potential neuroendocrine carcinoma or soft tissue lesions, owing to absent or infrequent training examples; this may be addressed in future iterations of the tool. Requesting IHC involves additional tissue staining and allocating extra pathologist time for case re-examination, which increases organisational complexity. In this study we showed that prostate biopsies requiring IHC take roughly double the pathologist reporting time and almost twice as many days to be reported. Some of this is process related, but it is also recognised that cases requiring IHC are inherently more complex. A few characteristics of the IHC workflow contributed significantly to higher time costs. Firstly, we found that the time between case reception and the IHC request date is an average of 2 days, which is redundant time whereby cases are awaiting review. Our tool moves the decision to the point at which the H&E is created in the lab. The time to perform the IHC and the inherently complex nature of these cases do not change with this tool. Rather, we have shown that 11 min per case can potentially be saved by advance IHC ordering, which we believe to be a conservative estimate. This is achieved by reducing the inefficiencies of two reporting sessions in re-reviewing slides and duplicating decision-making, i.e. the time-consuming decision of ordering IHC before the final diagnostic decision can be made. Processing the IHC slides in a time slot contiguous with the H&E slides provides a leaner workflow for the lab, as the case is only booked out to the pathologist once. Our approach involves targeted advance IHC requesting: requesting on every case should not be adopted in practice because the costs involved in staining extra tissue for every case outweigh the benefits. Like all similar tests, there is inevitably a trade-off in this model, with some degree of over- or under-calling.
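To make the trade-off concrete, the snippet below works through one hypothetical operating point; it assumes the audited prevalence (40% of cases need IHC) and treats the quoted false positive rates as fractions of all slides, which is our reading of the text.

```python
# Trade-off at an operating point: what is missed versus what is over-ordered.
def operating_point_tradeoff(sensitivity, fp_slide_fraction, prevalence=0.40):
    missed_share = 1.0 - sensitivity           # share of IHC-needing cases missed
    pre_ordered = sensitivity * prevalence     # share of all slides pre-ordered correctly
    total_ordered = pre_ordered + fp_slide_fraction
    return missed_share, pre_ordered, total_ordered

# Most conservative point in the text: sensitivity 60%, over-ordering on 15% of slides
missed, correct, total = operating_point_tradeoff(0.60, 0.15)
print(round(missed, 2), round(correct, 2), round(total, 2))
# 0.4 (40% of IHC-needing cases missed), 0.24, 0.39
```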
We envisage that the tool would be used adaptively and that centres would be able to select an acceptable threshold for ordering IHC based on institutional preference. We would need to explore with regulatory bodies how to achieve the setting of different performance points within the appropriate regulatory framework. This would likely involve submitting the validation data for a number of set thresholds to support the intended purpose. Laboratories would then be required to verify performance to the satisfaction of their governance team and external laboratory accreditation bodies. The roadmap to the introduction of AI into cellular pathology is complex, which has thus far limited uptake. In this study, we describe a proof-of-concept algorithm for IHC request by AI that effectively takes a workflow step away from the pathologist but does not directly affect the diagnosis. In particular, in the case of under-calling by the tool, i.e. a missed IHC request, IHC can nevertheless be requested by the pathologist after visual assessment of the H&E slide. Hence this is a relatively low-risk task, which might serve as a good entry point for the use of AI. There is an inherent component of subjectivity in the IHC requesting task, and thus a ground truth is difficult to define. A tool trained by a group of pathologists from one centre might not entirely represent practice in another centre, and the decision to request may be affected by a number of other non-morphological factors: level of fatigue, pathologist and institutional experience, degree of specialisation, and psychological and personality factors. The next iteration of the tool will need careful design to capture multi-centre training and validation and establishment of the ground truth by a panel of pathologists. For this we will leverage our fortunate position within the PathLAKE DP consortium (one of the UK Government's AI Centres of Excellence). In outlining likely workflow benefits and economic impacts, we acknowledge the limitations of our dataset and that the workflow we describe here may be slightly different in other institutions. A fully developed tool will ultimately require prospective validation in a real-time health care setting, with a wider evaluation of what time savings are deliverable in practice. One aspect of practice that could be considered a weakness of this AI tool is that in some difficult cases the thinking/reflecting time afforded by waiting for IHC is helpful. Of course, the tool does not stop the pathologist from walking away from a case for a while to get perspective, if clinically appropriate. In the many cases where the IHC is for confirmation of what is already a confident diagnosis on H&E, this should not be an issue. In the future, this work could be expanded for application to other prostate settings (such as transurethral resections). There is also potential to apply the tool to other tissue types (e.g. breast or lung biopsies), or to develop a generic tool for automated IHC or molecular requests. In summary, we designed and evaluated a tool for advance IHC requesting with the potential to reduce diagnostic times for PCa in the clinic. Our algorithm emulates the decision-making of pathologists and robustly estimates when IHC staining is required to diagnose a prostate biopsy. Unlike previous work that focuses on predicting the presence of cancer, we focus on automating a routine clinical task at the core of accurate PCa diagnosis.
We believe tools that help pathologists carry out their daily tasks and improve clinical workflow will provide significant benefits to healthcare institutions and expedite the adoption of DP in cancer clinics worldwide.
Immediate versus expedient emergent laparotomy in unstable isolated abdominal trauma patients

Intra-abdominal trauma is a common cause of injury, responsible for a significant percentage of all trauma casualties in the US. Blunt injury, caused by motor vehicle crashes or falls, comprises 80% of all abdominal injuries. Penetrating injury, caused by firearms or stab wounds, comprises 20% of all abdominal injuries. There is wide consensus among trauma surgeons that abdominal trauma patients who are haemodynamically unstable on admission should undergo immediate operative treatment. In addition, in most cases, unstable abdominal trauma patients are operated upon without any imaging because the time from admission to operative treatment is considered to have a critical effect on clinical outcomes and survival. Nonoperative treatment or laparoscopic surgery is not considered a component of management when patients are unstable, and is recommended only when treating haemodynamically stable trauma patients. In specific circumstances, depending on physiological parameters, the surgical management of unstable abdominal trauma patients consists of damage control laparotomy. Consensus regarding the need for urgent operative treatment in unstable trauma patients is based, in part, on the "golden hour" concept that was described during the 1970s, where definitive haemorrhage control is achieved within 1 h of the index injury. Management according to this concept was reported to be associated with reduced mortality. Although this notion has been accepted worldwide, few clinical studies have investigated whether shortening the time span from admission to definitive treatment does, indeed, reduce mortality. This study aims to investigate the influence of immediate versus expedient laparotomy in unstable abdominal trauma patients on survival and clinical outcomes. In addition, we aimed to assess whether differences in these aspects exist between blunt and penetrating abdominal injuries.
Study design and data collection

This is a retrospective study that includes haemodynamically unstable trauma patients with an isolated abdominal injury, hospitalised and treated in Israel between 2000 and 2018. Haemodynamic instability was defined as systolic blood pressure (SBP) less than 90 mmHg on admission. Only patients with an isolated abdominal injury, or with concomitant injuries having an Abbreviated Injury Scale (AIS) score of 1–2, were included; this restriction focused the study on patients with isolated or near-isolated abdominal injuries. AIS 1–2 was chosen because such injuries are considered minor and should have little to no impact on management priorities. Patients who arrived at the emergency department (ER) without signs of life, or patients with an SBP higher than 90 mmHg, were excluded from the study. All data were collected via the Israeli National Trauma Registry (INTR), maintained by Israel's National Center for Trauma and Emergency Medicine Research as part of the Gertner Institute for Epidemiology and Health Policy and Research. The INTR records information regarding trauma patients hospitalised in 19 different medical centres across Israel, of which 6 are Level I trauma centres and 13 are Level II trauma centres. All trauma patients with an International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) diagnosis code between 800 and 959.9 are registered, including patients who died in the ER or patients who were transferred to a different hospital. This registry records data for more than 90% of all trauma patients and 98% of all severe trauma patients in Israel. The INTR does not collect data regarding patients who died at the scene or en route to the hospital, or patients who were discharged from the ER and were not hospitalised. Data regarding patient age and gender, mechanism of injury, vital signs on admission, Injury Severity Score (ISS), Glasgow Coma Scale (GCS), time to surgery, surgical procedure, length of hospitalisation in an intensive care unit (ICU), total length of hospital stay (LOS) and survival were collected for each patient included in this study. The primary endpoint of the study was mortality, and the secondary endpoints were ICU stay, duration of ICU stay and total LOS.

Definitions

Immediate laparotomy was defined as laparotomy within 60 min of hospital admission. This definition was based on the "golden hour" concept of achieving definitive haemorrhage control within 1 h of admission. Expedient laparotomy was defined as laparotomy within 60–120 min of admission. This time frame for the expedient laparotomy group was chosen to ensure that all patients included in this study did undergo an "emergency laparotomy". The decision to compare patients who were operated upon within 1 h and those operated upon later was based on the "golden hour" concept.

Data analysis

Analysis of all data was performed separately for patients with blunt and penetrating injuries. A comparison between the immediate laparotomy group and the expedient laparotomy group was performed regarding clinical outcomes, including length of hospitalisation in an ICU, total LOS and mortality. In addition, stratification of clinical outcomes was performed for ISS and GCS within each group.

Statistical analysis

Comparison between the study groups was performed for baseline and studied variables. Continuous variables were compared using the Mann–Whitney U test.
Categorical variables were compared using either the chi-squared test or Fisher's exact test. Statistical significance was considered as a two-tailed p-value of ≤0.05. All analyses were performed using SAS software, version 9.4 (SAS Institute, Cary, NC, USA).

Ethical approval

This study was approved by the Institutional Review Board of the Sheba Medical Center (Approval No. SMC-18-5138).
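A minimal, self-contained sketch of these planned comparisons using SciPy (rather than the SAS software used in the study) is shown below; the group names mirror the study design, but the numbers are placeholders rather than study data.

```python
# Illustrative versions of the planned tests; values below are placeholders.
from scipy import stats

# Continuous outcome (e.g. total LOS, days) compared with the Mann-Whitney U test
los_immediate = [3, 5, 7, 9, 12, 20]
los_expedient = [6, 8, 10, 11, 15, 18]
u_stat, p_los = stats.mannwhitneyu(los_immediate, los_expedient,
                                   alternative="two-sided")

# Categorical outcome (e.g. mortality) as a 2x2 table: rows = group, cols = died/survived
table = [[8, 12],
         [3, 17]]
chi2, p_chi, dof, expected = stats.chi2_contingency(table)
odds_ratio, p_fisher = stats.fisher_exact(table)  # preferred when expected counts are small
```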
Between 1 January 2000 and 31 December 2018, there were 196 haemodynamically unstable patients with an isolated abdominal injury who underwent laparotomy within 120 min of arrival at the hospital. Of these, 127 (64.8%) patients suffered from penetrating abdominal injury and 69 (35.2%) suffered from blunt abdominal injury.

Penetrating abdominal injury

We identified 127 trauma patients admitted with isolated penetrating abdominal injuries and SBP <90 mmHg on admission. Of these, 119 patients (93.7%) were male and 8 (6.3%) were female. Mean age was 31 years. Overall, the mean length of stay in an ICU was 6.7 (±11) days, the mean total LOS was 12 (±19) days and total mortality was 31.5% (40 patients). In this group, 109 (85.8%) patients underwent immediate laparotomy within 60 min of arrival at the hospital (Group A), and 18 (14.2%) patients underwent expedient laparotomy within 60–120 min (Group B). A comparison of patients' characteristics and clinical outcomes is presented in . No differences regarding gender, age, GCS on admission and ISS were found between patients in the immediate and expedient laparotomy groups. Mean (±SD) LOS in patients who underwent immediate laparotomy and those who underwent expedient laparotomy was 5.9 (±10) and 12 (±3) days, respectively (p = 0.008). No other differences in clinical outcomes between patients in groups A and B were found.

Stratification of clinical outcomes for patients with penetrating injury

Results for stratification of clinical outcomes, including ICU stay, LOS and mortality, according to ISS and GCS are presented in . In penetrating abdominal injury patients with ISS ≤14, no differences were found with regard to length of stay in an ICU, total LOS, or mortality between patients who underwent immediate laparotomy and those who underwent expedient laparotomy. In patients with an ISS ≥16, length of stay in an ICU for patients in the immediate laparotomy group and patients in the expedient laparotomy group was 6.8 (±11) and 15 (±16) days, respectively (p = 0.04). In penetrating abdominal injury patients with GCS <15, no differences in clinical outcomes were found between patients in the immediate and expedient laparotomy groups.

Blunt abdominal injury

We identified 69 patients with an isolated blunt abdominal injury and SBP <90 mmHg on admission. Of these, 51 (73.9%) were male and 18 (26.1%) were female. Mean age was 35 years. Overall, the mean length of stay in an ICU was 6.9 (±9.4) days, the mean LOS was 8.1 (±9.9) days and mortality was 34.8% (24 patients). In this group, 40 (58.0%) patients underwent immediate laparotomy (Group C) and 29 (42.0%) patients underwent expedient laparotomy (Group D). A comparison of patients' demographics, GCS, ISS and clinical outcomes is presented in . The rate of GCS <15 in the immediate laparotomy group and the expedient laparotomy group was 60.0% and 21.0%, respectively (p = 0.001). No other significant differences regarding gender, age and ISS were found between the immediate and expedient laparotomy groups. Total LOS was 7 (±9.2) days in the immediate laparotomy group and 9.6 (±11) days in the expedient laparotomy group (p = 0.022). The mortality rate was 50.0% and 13.8% in the immediate and expedient laparotomy groups, respectively (p = 0.002).

Stratification of clinical outcomes for patients with blunt injury

Results for stratification of clinical outcomes, including ICU stay, LOS and mortality, according to ISS and GCS are presented in .
In blunt abdominal injury patients with an ISS ≤14, no differences were found with regard to length of stay in an ICU, total LOS or mortality between patients in the immediate and expedient laparotomy groups. In patients with an ISS ≥16, no differences were found with regard to length of stay in an ICU. Total LOS in patients with an ISS ≥16 who underwent immediate laparotomy and patients who underwent expedient laparotomy was 6.2 (±9.5) and 10 (±12) days, respectively (p = 0.01). The mortality rates of patients in the immediate and expedient laparotomy groups were 60.6% and 20.0%, respectively (p = 0.004). In blunt abdominal injury patients with GCS <15, mortality rates were 79.2% and 33.3% in the immediate and expedient laparotomy groups, respectively (p = 0.049).
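The headline blunt-injury mortality comparison can be reconstructed from the reported group sizes and percentages (50% of 40 immediate-laparotomy patients = 20 deaths; 13.8% of 29 expedient patients = 4 deaths). The check below is ours, not taken from the study; the exact p value depends on the test and continuity correction used, but it lands close to the reported p = 0.002.

```python
# Rebuild the 2x2 mortality table for blunt injury and re-test it.
from scipy import stats

table = [[20, 20],   # immediate laparotomy: died, survived (n = 40, 50.0%)
         [ 4, 25]]   # expedient laparotomy: died, survived (n = 29, 13.8%)

odds_ratio, p_fisher = stats.fisher_exact(table)
chi2, p_chi2, dof, _ = stats.chi2_contingency(table)  # Yates-corrected by default
print(round(p_fisher, 4), round(p_chi2, 4))           # both of order 0.002-0.004
```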
Abdominal trauma continues to represent a major treatment challenge in trauma centres around the world and remains associated with significant morbidity and mortality. In this study, we aimed to evaluate the association between the timing of laparotomy and clinical outcomes in haemodynamically unstable penetrating and blunt abdominal trauma patients. Without any doubt, these patients require urgent surgery. This statement is based on evidence that early bleeding control decreases the incidence of the physiological lethal triad, blood transfusion requirements and the risk of hypoxic brain injury, and therefore reduces postoperative morbidity and mortality. A recent study by Smith et al demonstrated that, in patients managed with damage control resuscitation and damage control surgery, the appearance of the "death triad" predicted mortality in only 16.6% of patients. Thus, the authors concluded that other factors might contribute to in-hospital mortality. One such factor may be time to surgery. The real impact of "time to surgery" on patients' outcomes remains unclear, and a debate regarding the true association between time to surgery and clinical outcome still exists. In a study published in 2002, Clarke et al reported that a shorter time span from admission to laparotomy was associated with reduced mortality in patients undergoing surgery within 90 min of admission. However, in a more recent study, published in 2020, Okada et al demonstrated no association between a shorter time to surgery and reduced mortality rates in unstable abdominal injury trauma patients. Moreover, when the timing of surgery is discussed, it is not clear what the term "immediate" means. In the Merriam-Webster dictionary, the term "immediate" is defined as "occurring without loss or interval of time", whereas expedient is defined as "suitable for achieving a particular end in a given circumstance". Owing to obvious ethical limitations, there is a lack of prospective controlled studies on this topic and many questions remain unanswered. No acceptable uniform definition for haemodynamic instability exists. Most decision-making processes are based solely on clinical judgement, which varies worldwide. Although the "golden hour" concept is applied by most trauma systems, there are studies that do not necessarily support the current recommendations. For example, a study of 3,656 trauma patients investigated this approach and showed no association between the transport time to an ER and mortality among injured patients with physiological abnormality in the field. Other studies evaluating the impact of transport times in trauma patients demonstrate no influence of longer evacuation time on mortality. This may be specifically true in a small country, such as Israel, where transport times from the scene of injury to the designated trauma centre are significantly shorter than in some US areas. The possible benefit seen in this study may be because achievement of vascular access and the start of resuscitation were performed more cohesively in patients who arrived later at the operating theatre. Another possible benefit of expedient laparotomy is the option to utilise endovascular and hybrid methods such as embolisation, endografts and resuscitative endovascular balloon occlusion of the aorta (REBOA). For example, insertion of an aortic balloon before surgery may enable inflation of the balloon intraoperatively and help prevent haemodynamic deterioration during laparotomy.
The possible benefit of expedient laparotomy may be further supported by a critique published by Lerner et al, which examined the evidence for the "golden hour" concept in detail and did not identify any studies to support it. Conversely, is a delay in surgery justifiable? A previous study demonstrated that time to surgery, up to 2 h following hospital admission, had no impact on outcomes in stable trauma patients with penetrating abdominal injury. The current study focuses on initially unstable patients with an abdominal injury and assessed the association between time to surgery and clinical outcome. In patients with penetrating abdominal injury, even after stratification for ISS, no significant differences in clinical outcomes were found between early and late surgery, including length of ICU stay, total LOS and mortality. The only difference in a clinical outcome measure was a longer ICU stay in patients with ISS ≥16 who underwent expedient laparotomy. In patients with blunt abdominal injury, mortality was higher among patients who were operated upon within 60 min of admission. Following stratification for ISS and GCS, in patients with ISS ≥16 as well as in patients with GCS <15, mortality remained higher for patients with blunt injury who were operated upon early. To the best of our knowledge, this is the first study that examines the impact of time to surgery on both blunt and penetrating injury in unstable patients with isolated abdominal injuries. A possible explanation for this result might be that expedient laparotomy may enable involvement of a senior trauma/vascular/anaesthesia team, better preoperative preparation, including blood bank readiness, better preoperative resuscitation, utilisation of different endovascular and hybrid treatment methods, and/or a preoperative computerised tomography scan that enables better understanding of the injury prior to surgery. The results of the current study suggest time may not be the sole and most important factor when treating trauma patients. These results are preliminary but may justify additional investigations. Further studies are needed to evaluate what is more important: time to scalpel only, or time to resuscitation en route to scalpel. A large database with computerised real-time recording of physiological parameters and interventions may identify the patients who could benefit from each approach. We believe our findings suggest a need for future prospective studies regarding the optimal use of time before laparotomy in unstable abdominal trauma patients.

Limitations

The present study has several limitations. First, despite the use of a national database, this is a retrospective study with a relatively small sample size. Another limitation is that the trauma registry does not include data regarding specific clinical examination findings, or a clear description of clinical judgement and criteria for emergency surgery, response to fluids and resuscitation, haemodynamic instability, indications for surgery, reasons for decreased GCS score and reasons for surgery delay. Thus, patients who were operated upon within 60 min may have been more severely injured in a manner that is not represented by the data recorded in the registry. In addition, the potential differences in resuscitation protocols between the medical centres in this study must be noted. For these reasons, additional prospective studies on this topic are warranted.
For patients with penetrating injury, no differences in mortality between immediate and expedient laparotomy were found. For patients with blunt injury overall, as well as for more severely injured patients with a high ISS and a low GCS, mortality was significantly higher among patients who were operated upon immediately compared with those with similar injury severity who were operated upon expediently. Although the time to definitive treatment is an important factor when treating unstable patients, this study cautiously suggests that an optimal use of the time before surgical treatment may be beneficial and have a potentially positive impact on outcome. Certainly, further studies are needed to reinforce or reject these results.

The Israeli Trauma Group

A Acker, N Aviran, H Bahouth, A Bar, A Becker, M Ben Ely, D Fadeev, I Grevtsev, I Jeroukhimov, A Kedar, A Korin, A Lerner, M Qarawany, AD Schwarz, W Shomar, D Soffer, M Stein, M Venturero, M Weiss, O Yaslowitz, I Zoarets.
Does the application of autologous injectable Platelet-Rich Fibrin (i-PRF) affect the patient's daily performance during the retraction of upper canines? A single-centre randomized split-mouth controlled trial

The true efficacy of any treatment under question is certainly of great importance. However, it is not only the efficacy of the intervention that matters but also the resulting Quality of Life (QoL). Higher related QoL levels mean that the procedure will meet with patients' acceptance and that healing will be promoted. Patient-Reported Outcome Measures (PROMs) are authentic manifestations of the QoL concept, representing the resulting interaction between the patients and the techniques or materials used. Orthodontic treatment of adult patients is frequently challenging because of their higher expectations regarding aesthetics and comfort. Moreover, fast completion of the orthodontic treatment is one of the priorities of adults, especially in situations when tooth extraction is necessary. This requirement is difficult to meet due to the decreased bone turnover and increased bone maturity in adults. Therefore, methods of accelerating Orthodontic Tooth Movement (OTM) have become the subject of much research in recent decades. Most of them target the process of remodelling of the alveolar bone and periodontal ligament (PDL). They also aim to avoid the adverse effects resulting from long and extended orthodontic treatment durations, such as root resorption, white spot lesions, caries, and periodontal problems. These methods are divided into surgical (osteotomy, corticotomy, corticision, piezocision, micro-osteo-perforation, dentoalveolar distraction osteogenesis, periodontal distraction and surgery first) and non-surgical (self-ligating brackets, medications, photo-biomodulation, electromagnetic field, electrical currents and vibration), with the former regarded as more effective. However, they are invasive in nature and are usually associated with pain, oedema, and occasional loss of periodontal support of the tooth. All of the aforementioned can be critical deterrents to orthodontic treatment and are in fact common causes of treatment discontinuation. Consequently, biomaterials such as platelet-rich plasma (PRP) and platelet-rich fibrin (PRF) have recently been introduced as better alternatives to the surgical interventions, and their efficacy in accelerating orthodontic tooth movement has been tested in previous studies attempting to overcome the invasive surgical hazards. Their potential is mainly attributed to their high content of growth factors, which play crucial roles in wound healing and bone regeneration. These biomaterials have been widely used in both dental and medical fields because of their therapeutic effects. Unfortunately, researchers have studied the aforenamed alternative techniques without paying enough attention to their detrimental effects or to pain levels that could threaten patients' cooperation in terms of attending their appointments, taking care of their appliances and following the clinician's instructions, and could finally lead them to refuse or cease the orthodontic treatment.
Although injectable platelet-rich fibrin (i-PRF) is considered to be a promising biomaterial, the scientific evidence regarding patient-reported outcome measures (PROMs) in the orthodontic field is still lacking: a systematic review identified only a single study that recorded pain scores when PRP injection was used. Moreover, a later systematic review and meta-analysis revealed a huge gap in the studies dealing with the pain and discomfort associated with the use of condensed platelets (platelet-rich concentrates); only three articles discussed the accompanying pain, and it was addressed on its own (without any other variables). On top of that, these three papers were based on the use of PRP, not i-PRF. No previous studies have investigated the levels of pain, discomfort, swelling, chewing difficulties, swallowing difficulties, jaw movement limitation, satisfaction, which experience is harder, and patients' recommendations together, rather than pain alone. In other words, no previous work has assessed the full range of PROMs associated with the application of injectable platelet-rich fibrin (i-PRF) during the retraction of upper canines in Class II Division 1 patients, which was the main aim of this study. Accordingly, the null hypothesis was that there would be no significant differences between the two sides regarding the measured variables.
Study design and sample

This was a randomized split-mouth clinical trial with a 1:1 allocation ratio to intervention and control sides, conducted in the Department of Orthodontics at the Faculty of Dentistry, Damascus University. The study was approved by the institutional review board (IRB) and ethical review committee of Damascus University (No. 2473). The CONSORT (Consolidated Standards of Reporting Trials) statement was followed as a guide for this study, which was registered at ClinicalTrials.gov with the identifier number NCT03399422. The recruited study participants were patients presenting to the Department of Orthodontics, Faculty of Dentistry, Damascus University. Patients' inclusion and follow-up are shown in the CONSORT flow chart (Fig. ). The sample size was calculated to detect significant differences in pain perception between the two sides, based on a previous split-mouth study, with 90% study power and a 5% permissible α error, using G*Power 3.1.3 software (Heinrich-Heine-Universität, Düsseldorf, Germany). Therefore, 21 participants were recruited in this study. The total duration of the study was 10 months.

Inclusion and exclusion criteria

Inclusion criteria were: patients aged 16–28 years with Class II Division 1 malocclusions and mild to moderate skeletal discrepancies (ANB ≤ 7°) requiring bilateral maxillary first premolar extractions; crowding ≤ 3 mm; overjet < 10 mm; no tooth loss except third molars; normal to vertical growth pattern; no transverse discrepancy; no systemic diseases; good oral hygiene (Gingival Index < 1, Plaque Index < 1, both according to Silness and Löe); and a normal platelet count. Exclusion criteria were: patients taking anticoagulants or medications that interfere with orthodontic tooth movement (NSAIDs, bisphosphonates, and corticosteroids), smokers, bony defects observed radiographically, and a previous history of orthodontic treatment. The purpose and methods of the study were comprehensively explained to the potential participants who met the inclusion criteria. After ensuring the patients' compliance and acceptance, the patients, and/or their legal guardians for those who were under 18 years old, were asked to sign an informed consent form.

Randomization and blinding

Computer-generated random numbers were used for randomization of the right and left extraction sides to either the experimental side (i-PRF) or the control side (non-i-PRF), with a 1:1 allocation ratio. The randomization was done by a research assistant who was not involved in this trial. Blinding was only applicable in the data analysis phase.

The intervention

The intervention was an injection of i-PRF on one of the extraction sides at a predetermined moment of the standardized orthodontic treatment, which comprised: fixed orthodontic appliances with an MBT 0.022-inch slot (Votion, Ortho Technology, West Columbia, SC, USA); an initial archwire sequence of 0.014-in NiTi (or 0.016-in NiTi, depending on the amount of crowding), 0.016 × 0.022-in NiTi, and 0.017 × 0.025-in NiTi; extraction of the maxillary first premolars just before the insertion of the 0.019 × 0.025-in SS archwire; and canine retraction achieved with closed nickel-titanium coil springs delivering 150 g of force per side (Fig. ). Twenty mL of blood were drawn from each patient and centrifuged (700 rpm for 3 min) using a benchtop centrifuge (HW6C, HWLAB® Mini Combo Centrifuge, ZheJiang, China) and dry sterile glass tubes without any additives to obtain approximately 3 mL of the yellow-orange upper portion (the i-PRF).
The i-PRF was injected twice: at the moment of initiating the canine retraction and 1 month later, both times in the area of the extracted upper first premolar on the intervention side, after topical anaesthetization with 8% lidocaine spray. Two mL were injected on the buccal side and 1 mL on the palatal side of the intervention area, in the same way as local infiltrative anaesthesia is administered (Fig. ). No medications were prescribed following the injection. All clinical procedures, orthodontic treatments and injections alike, were done by the same investigator (TZ).

Questionnaires

Two questionnaires (Q1 and Q2) were administered following a comprehensive explanation of the purpose of the survey. Q1 aimed to record the levels of pain, discomfort, swelling, difficulties in mastication, difficulties in swallowing and jaw movement restriction using a 100-mm Visual Analogue Scale (VAS) (Fig. ), where 0 mm denoted the most favourable situation (e.g. no pain) and 100 mm denoted the least favourable situation (e.g. the worst pain ever). All participants were asked to fill in Q1 at 5 time points: 1 h (T1), 2 h (T2), 6 h (T3), 24 h (T4) and 48 h (T5) after the 2nd i-PRF injection. Q2 consisted of 3 questions aiming to assess the patient's satisfaction with the procedure and the probability of recommending this procedure to the patient's family and/or friends. Q2 was administered at the end of the canine retraction phase (Table ). A 100-mm VAS was used, where 0 mm denoted the least satisfaction (e.g. not happy with the experience at all) and 100 mm denoted complete satisfaction (e.g. totally happy with it) (Fig. ).

Statistical analysis

Statistical analysis was accomplished using IBM SPSS version 25 (SPSS Inc., Chicago, IL, USA); probability values equal to or less than 0.05 were considered significant. The analysis was performed by one of the researchers, who was blinded to the study results. Non-parametric tests were used to analyse the data, which were not normally distributed; in particular, the Wilcoxon Signed-Rank Test was used to compare the levels of pain, discomfort, swelling and chewing difficulties between the two sides. Friedman's Test was selected for detecting variables' changes over time. Post-hoc Wilcoxon Matched-Pairs Signed-Rank Tests were applied when any of the results were significant. Bonferroni Correction was adopted to account for the multiplicity of tests.
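A self-contained sketch of this analysis plan is given below using SciPy in place of SPSS; the VAS values are simulated placeholders, and the Bonferroni divisor of 10 corresponds to the pairwise comparisons among the five time points.

```python
# Illustrative analysis: paired Wilcoxon between sides, Friedman across time points.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pain_iprf = rng.integers(0, 60, size=21)   # VAS (mm), injected side at one time point
pain_ctrl = rng.integers(0, 60, size=21)   # VAS (mm), control side at the same point

# Wilcoxon Signed-Rank Test between the two sides (paired, split-mouth design)
w_stat, p_sides = stats.wilcoxon(pain_iprf, pain_ctrl)

# Friedman's Test for change over the five time points (T1..T5 as columns)
vas_over_time = rng.integers(0, 60, size=(21, 5))
f_stat, p_time = stats.friedmanchisquare(*vas_over_time.T)

# Post-hoc pairwise Wilcoxon tests would then use a Bonferroni-corrected alpha
alpha_corrected = 0.05 / 10   # 10 pairwise comparisons among 5 time points
```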
It was a randomized split mouth clinical trial with 1:1 allocation ratio to intervention and control sides conducted in the Department of Orthodontics at the Faculty of Dentistry, Damascus University. The study was approved by the institutional review board (IRB) and ethical review committee of Damascus University (N. 2473). The CONSORT (Consolidated Standards of Reporting Trials) statement was followed as a guide for this study which was registered at Clinicaltrials.gov with the identifier number (NCT03399422). The recruited study participants were patients presenting to the Department of Orthodontics, Faculty of Dentistry, Damascus University. Patients’ inclusion and follow-up is shown in the CONSORT flow chart (Fig. ). The sample size was calculated to investigate the significant differences in pain perception between both sides based on a split-mouth design previous study and with 90% of study power and 5% of permissible α error using G*Power 3.1.3 software (Heinrich-Heine-Universitӓt, Düsseldorf, Germany). Therefore, 21 participants were recruited in this study. The total duration of the study was 10 months.
Inclusion criteria were : patients aged 16–28 years with class II division I malocclusions and mild to moderate skeletal discrepancies (ANB ≤ 7) requiring bilateral maxillary first premolars extractions; crowding ≤ 3; OJ < 10; no tooth loss except third molars; normal to vertical growth pattern; no transverse discrepancy; no systemic diseases; good oral hygiene (Gingival Index < 1, Plaque Index < 1, both according to Silness and Löe) , and normal platelet count. Exclusion criteria were : patients taking anticoagulants or medications that interfere with orthodontic tooth movement (NSAIDS, Bisphosphonates, and Corticosteroid), smokers, bony defects observed radiographically, previous history of orthodontic treatment. The purpose and methods of the study were comprehensively clarified to the potential participants who met the inclusion criteria. After ensuring the patients’ compliance and acceptance, the patients and/or their legal guardians for those who were under 18 years old, were asked to sign an informed consent.
Computer-generated random numbers were used to randomize the right and left extraction sides to either the experimental side (i-PRF) or the control side (no i-PRF), with a 1:1 allocation ratio. The randomization was done by a research assistant who was not involved in this trial. Blinding was only applicable in the data analysis phase.
The intervention was an injection of i-PRF on one of the extraction sides at a predetermined moment of the standardized orthodontic treatment, which comprised: fixed orthodontic appliances with an MBT 0.022-in slot (Votion, Ortho Technology, West Columbia, SC, USA); an initial archwire sequence of 0.014-in NiTi (or 0.016-in NiTi, depending on the amount of crowding), 0.016 × 0.022-in NiTi and 0.017 × 0.025-in NiTi; extraction of the maxillary first premolars just before insertion of the 0.019 × 0.025-in stainless steel archwire; and canine retraction with closed nickel-titanium coil springs delivering 150 g of force per side (Fig. ). Twenty mL of blood were drawn from each patient and centrifuged at 700 rpm for 3 min (HW6C, HWLAB® Mini Combo Centrifuge, ZheJiang, China) in dry sterile glass tubes without any additives, to obtain approximately 3 mL of the yellow-orange upper portion (the i-PRF). The i-PRF was injected twice: at the moment of initiating canine retraction and 1 month later, both at the area of the extracted upper first premolar on the intervention side, after topical anesthetization with 8% lidocaine spray. Two mL were injected buccally and 1 mL palatally on the intervention side, in the same way as local infiltrative anaesthesia (Fig. ). No medications were prescribed following the injection. All clinical procedures—orthodontic treatments and injections—were done by the same investigator (TZ).
Two questionnaires (Q1 and Q2) were administered following a comprehensive explanation of the purpose of the survey. Q1 recorded the levels of pain, discomfort, swelling, difficulty in mastication, difficulty in swallowing and jaw-movement restriction using a 100-mm Visual Analogue Scale (VAS; Fig. ), where 0 mm denoted the most favourable situation (e.g., no pain) and 100 mm the least favourable situation (e.g., the worst pain ever). All participants were asked to fill in Q1 at five time points: 1 h (T1), 2 h (T2), 6 h (T3), 24 h (T4) and 48 h (T5) after the 2nd i-PRF injection. Q2 consisted of three questions assessing patients' satisfaction with the procedure and the likelihood of their recommending it to family and/or friends; it was administered at the end of the canine retraction phase (Table ). A 100-mm VAS was used, where 0 mm denoted the least satisfaction (e.g., not happy with the experience at all) and 100 mm complete satisfaction (e.g., totally happy with it) (Fig. ).
Statistical analysis was performed using IBM SPSS version 25 (SPSS Inc., Chicago, Ill., USA); probability values equal to or less than 0.05 were considered significant. The analysis was performed by one of the researchers in a blinded manner. Non-parametric tests were used to analyse the data, which were not normally distributed: the Wilcoxon signed-rank test was used to compare the levels of pain, discomfort, swelling and chewing difficulty between the two sides, and Friedman's test was used to detect changes in the variables over time. Post-hoc Wilcoxon matched-pairs signed-rank tests were applied when any of the results were significant, and Bonferroni correction was adopted to account for the multiplicity of tests.
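For illustration only (this is not the authors' code), the tests described above can be reproduced in Python with SciPy as follows; the VAS arrays are simulated placeholders and the variable names are assumptions.

```python
import numpy as np
from scipy.stats import wilcoxon, friedmanchisquare

rng = np.random.default_rng(0)
pain_iprf = rng.integers(0, 60, size=21)     # VAS (mm), intervention side at T1
pain_control = rng.integers(0, 40, size=21)  # VAS (mm), control side at T1

# Between-side comparison at a single time point (Wilcoxon signed-rank test)
stat, p = wilcoxon(pain_iprf, pain_control)
print(f"Wilcoxon signed-rank: W = {stat:.1f}, p = {p:.4f}")

# Longitudinal change across four time points (Friedman's test)
t1, t2, t3, t4 = (rng.integers(0, 60, size=21) for _ in range(4))
chi2, p_f = friedmanchisquare(t1, t2, t3, t4)
print(f"Friedman: chi2 = {chi2:.2f}, p = {p_f:.4f}")

# Bonferroni correction for the post-hoc pairwise comparisons
n_comparisons = 6            # pairwise tests among four time points
alpha_adjusted = 0.05 / n_comparisons
```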
Twenty-one patients aged 16–28 years (mean age 20.9, SD = 3.9 years) participated in this study. There were no changes to the study protocol after trial commencement. The main outcome measure—the duration of canine retraction—was described in our previous study. In summary, i-PRF injection failed to reduce the duration of canine retraction: it significantly accelerated upper canine retraction only during the 2nd month of the retraction period (an acceleration rate of 31.7%), whereas there were no differences in the rate of canine movement between the intervention and control sides in the remaining months of the retraction phase . Descriptive statistics of the responses to the Q1 questionnaire are shown in Tables and . Note that, since all values recorded at T5 (48 h after injection) were 0, they were not tabulated. Table shows that the mean values of pain, discomfort, swelling and chewing difficulty were higher on the intervention side than on the control side at T1, T2 and T3. However, the differences were statistically significant only at T1, T2 and T3 for pain ( P < 0.001, P = 0.002 and P = 0.023, respectively) and swelling ( P < 0.001, P = 0.001 and P = 0.015, respectively), whereas discomfort and chewing difficulty differed between sides at T1 and T2 only ( P < 0.001 and P = 0.003 for discomfort; P = 0.016 and P = 0.017 for chewing difficulty). At T4 the differences between the two sides were not statistically significant ( P > 0.05). No significant associations of age or gender with pain or discomfort were found using the Spearman correlation coefficient and Kruskal–Wallis tests, respectively (Table ). Difficulty in swallowing and jaw-movement limitation were comparable at all time points (Table ), meaning that the injection of i-PRF did not affect these parameters at any point in time. Regarding longitudinal (i.e., within-group) changes between the four time points, the levels of pain, discomfort, swelling and chewing difficulty differed significantly over time ( P < 0.001) according to Friedman's test. Post-hoc pairwise comparisons with Bonferroni adjustment of the alpha level for the variables showing significant differences are presented in Table . In contrast, difficulty in swallowing and jaw-movement limitation showed no significant differences ( P > 0.05) (Table ). Satisfaction with the injection was fairly high (75.71 ± 27.85), as shown in Table . A larger percentage of patients felt disturbed by the extraction of the upper premolars (80.95%) than by the injection (14.29%), while 4.76% reported that both procedures were equally annoying and unpleasant. Moreover, the majority of participants (85.71%) would recommend this technique to their friends.
The application of platelet-rich fibrin to facilitate orthodontic tooth movement has not been thoroughly investigated with regard to its effect on patients' daily activities. To the best of our knowledge, no previous studies have evaluated patients' perceptions associated with i-PRF injection during orthodontic treatment. Hence, in this study we assessed pain, discomfort, swelling, difficulty in mastication and swallowing, and limitation of jaw movement after the application of i-PRF during retraction of the upper canines. Moreover, we evaluated patients' satisfaction with the procedure. We used the VAS to measure these inherently subjective variables because it is a reliable and easy method that has been widely used in previous studies [ , , ]. The low-speed centrifugation protocol (700 rpm for 3 min) was adopted to obtain the i-PRF [ , , ] because it has several advantages, such as higher rates of regenerative cells and growth factors . In addition, it provides a more natural and gradual transformation, leading to increased cytokine integrity as well as higher leukocyte proportions in the fibrin network, which in turn prolongs cytokine secretion and growth factor release and hence increases the efficiency of i-PRF compared with the conventional form of PRF [ – ]. Furthermore, the injectable version of PRF enabled us to apply it immediately prior to the initiation of canine retraction, in an attempt to obtain the best possible outcomes. PRF is the second generation of platelet concentrates and has the advantage of a gradual release of growth factors lasting up to 28 days [ , , ]. As a result, i-PRF was injected twice with a one-month interval, unlike in other studies in which different administration frequencies were followed. For example, Karakasli et al. and Erdur et al. applied i-PRF twice with a 2-week interval in maxillary incisor and canine retraction cases, over follow-up periods of one and three months, respectively, whereas in Karsi and Baka's study the injections of i-PRF were repeated 4 and 8 weeks after the first delivery . In contrast, Ibrahim et al. injected i-PRF only once during upper canine distalization, and likewise Rokia et al. administered i-PRF at the beginning of the levelling and alignment stage with no repetitions. Patients' responses regarding pain, discomfort, swelling, difficulty in mastication and swallowing, and limitation of jaw movement after the second injection were used for the analysis because we wanted patients to have prior experience with the injection. In this way, we aimed to reduce the role of stress in the responses, especially immediately after the injection; by analysing the responses after the second injection, we obtained more reliable information about how patients felt. Additionally, none of the participants used any kind of analgesic, which further supports the reliability of the answers. Despite the differences between the experimental and control sides in the perception of injection-related "stress" for most variables and at most assessment time points, the mean values of the studied variables were relatively low even 1 h after the injection (T1). For example, our data showed that the levels of pain on the experimental side were statistically significantly higher than on the control side at T1, T2 and T3 ( P < 0.05); nevertheless, the "unpleasantness" caused by pain at T1 was judged to be relatively mild.
In general, the increased pain level could be related to gingival trauma after the injection as well as to the simultaneous application of orthodontic force—coil springs were used to move the canines—which might cause some irritation and discomfort . Moreover, a general fear of needles and syringes during treatment is an inevitable fact in any society . The quickly decreasing levels of pain, discomfort and the other variables could be explained by the anti-inflammatory properties of the material, which have been demonstrated in many studies in which PRF was used as a palliative material to help reduce post-operative pain during invasive surgical interventions or third molar extractions . Our results agreed with the findings of Liou , who assessed the pain and discomfort associated with the application of PRP, although it is worth noting that Liou did not follow a systematic and precisely defined protocol. Liou reported that 85% of the participants experienced low to moderate levels of pain and discomfort within the first 6–12 h after the injection. A comparison with our results is not straightforward because of differences in the material used, the methodology and the type of tooth movement (en-masse retraction, mesial movement of molars, and levelling and alignment versus canine retraction in our study). Our study is more comparable to that of El-Timamy et al. , who also adopted the VAS to measure the variables. They found no statistically significant differences in pain levels between the experimental and control sides in their split-mouth study, which does not agree with our results. The difference could result from their injecting the intervention side with PRP and the control side with calcium chloride, indicating that the pain sensation was related to the injection procedure (the needle itself) rather than to the material. Their pain levels were higher in the first, fourth and seventh weeks post-injection, owing to the different protocol and frequency of administration. Pain levels accompanying PRP injection were also studied by El Gazzar et al. at 1 h, 6 h, 12 h and 24 h following upper canine retraction in a split-mouth study; they reported higher values in the study group than in the controls at all assessed time points. However, no pain was detected bilaterally after 24 h, which is in accordance with our research. A submucosal tunnel injection technique for PRP was adopted for en-masse retraction in Chandak and Patil's study , which demonstrated increased 24-h pain levels in the intervention group versus the control group, although the difference was no longer significant after 7 days; the elevated pain values after 24 h (which contradict our results) could be explained by the administration method, which can be considered painful. In our study, the intensity of pain and discomfort observed at T1 was similar to the values reported by Kuroda et al. , who registered the highest levels of pain and discomfort one hour after mini-screw placement. Although females have been reported to have different pain profiles from males , no statistically significant correlations were found between age/gender and pain/discomfort levels in our study.
Swelling levels were greater on the experimental side than on the control side within the first six hours ( P ≤ 0.01) and then decreased sharply after 24 h; this might be attributed to the oedema associated with the injection and to the submucosal accumulation of the material, which diminished gradually. This is consistent with Liou's study , which showed that 85% of patients suffered from oedema during 6 to 12 h post-injection. Sreenivasagan et al. evaluated the discomfort associated with different sites of mini-screw insertion and their detrimental effects on chewing and jaw function using the Wong-Baker Faces Pain rating scale , and found that difficulty in eating was experienced most at the infra-zygomatic crest mini-screw region, followed by the palatal mini-screws and the buccal shelf mini-implants. The interradicular mini-implants had the lowest scores for chewing difficulty and for all assessed variables, meaning that they had the highest acceptability and the least interference with daily functions and were therefore preferable over the other mini-screw types. Likewise, statistically significant mastication difficulties were detected in our investigation; these could result from the pain accompanying the injection, the discomfort caused by the springs, and food stickiness between the helices of the coil spring, which is critical to clean . The subsequent reduction and negligible differences were due to the decreased levels of pain and discomfort and to the patients becoming accustomed to the coils. Our findings regarding swallowing and jaw-movement limitation showed no statistically significant increases at any time interval ( P > 0.05). The slightly elevated values during the first two hours can be explained by the correlated pain and discomfort, which quickly faded away, and by the conservative nature of the procedure, unlike cortical bone puncturing techniques, which cause considerable annoyance and require some time to heal. Satisfaction rates were recorded after the second injection, when the patient had a complete perception of the injection procedure and was accustomed to the presence of the coil springs. A fairly good level of satisfaction was reported (75.7%) amongst participants, which can be attributed to the minimally invasive approach, which resembles infiltrative anaesthesia and is of limited aggressiveness. Eighty-one percent of the patients stated that extraction of the premolars was more annoying than the injection itself. Our findings are in line with Ganzer et al.'s study , which reported that patients perceived pain after premolar extraction more negatively than the placement of miniscrews. Patients were asked to report whether or not they would recommend this procedure to a friend, and 85.7% of the answers were in favour of recommendation, which reflects the good acceptance of this technique and its limited short-term disturbances. The split-mouth design and the absence of placebo injections might be considered the main limitations of this study, as they could somewhat confound our findings. Still, a placebo injection would have been unethical, causing unjustifiable pain without any therapeutic rationale, and was therefore abandoned.
Long-term effects of the injection on patients' perceptions could be addressed with prolonged follow-up periods; however, all values had dropped to zero by the fifth evaluation time point. Blinding was only possible during the analysis of the results, because neither the main investigator who conducted the research nor the patients who received the injections could be blinded.
The results of this study support the following conclusions: (1) Injectable platelet-rich fibrin (i-PRF) does not cause high levels of discomfort within the first day following the injection, and it can be considered a minimally invasive technique with minimal side effects. (2) Platelet-rich fibrin injections are initially accompanied by low to medium levels of pain, discomfort, swelling, eating and swallowing difficulties, and jaw-movement restriction, but only during the first 6 h after application. (3) Patients treated with this method can return to their normal life on the second day, by which time the associated values of pain and discomfort drop to zero, meaning that the unfavourable effects of the injection are temporary.
Machine Learning: An Overview and Applications in Pharmacogenetics | 5059528a-d411-4f91-a1af-e7b7666803d6 | 8535911 | Pharmacology[mh] | Pharmacogenetics aims to assess the interindividual variations in DNA sequence related to drug response . Gene variations indicate that a drug can be safe for one person but harmful for another. The overall prevalence of adverse drug reaction-related hospitalization varies from 0.2% to 54.5% . Pharmacogenetics may prevent drug adverse events by identifying patients at risk in order to implement personalized medicine, i.e., a medicine tailored focused on genomic context of each patient. The need to obtain increasingly accurate and reliable results, especially in pharmacogenetics, is leading to a greater use of sophisticated data analysis techniques based on experience called Machine Learning (ML). ML can be defined as the study of computer algorithms that improve automatically through experience. According to Tom M. Mitchell “A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E.” . According to the final goal, ML can be defined as Supervised (SML) or as Unsupervised (UML). SML techniques are applied when prediction is the focus of the research. On the other hand, UML techniques are used when the outcome is not known, and the goal of the research is unveiling the underlying structure of the data. This narrative review aims to provide an overview of the main SML and UML techniques and their applications in pharmacogenetics over the past 10 years. The following search strategy, with a filter on the last 10 years, was run on PubMed “machine learning AND pharmacogenetics” . The paper is organized as follows: illustrates the SML approach and its application on pharmacogenetics; reports the principal UML approach and its application on pharmacogenetics; is devoted to discussion.
Several SML techniques have been implemented. They can be classified into two categories: regression methods and classification methods .

2.1 Regression Methods

The simplest regression method is linear regression. A linear model assumes a linear relationship between the input variables ($X$) and an output variable ($Y$) . The standard formulation of linear regression with standard estimation techniques rests on four assumptions: (i) linearity of the relationship between $X$ and the expected value of $Y$; (ii) homoscedasticity, i.e., the residual variance is the same for any value of $X$; (iii) independence of the observations; and (iv) normality: the conditional distribution of $Y \mid X$ is normal. To overcome the linear regression model assumptions, generalized linear models (GLM) have been developed. GLMs generalize linear regression by allowing the linear model to be related to the response variable via a link function :

$$E(Y \mid X) = \mu_i = g^{-1}(x_i^T \beta)$$

where $\mu_i$ is the response function and $g$ is the link function. In order to address more complex problems, sophisticated penalized regression models have been developed, which overcome problems such as multicollinearity and high dimensionality. In particular, Ridge regression is employed when problems with multicollinearity occur; it consists of adding a penalization term to the loss function as follows:

$$\arg\min_{\beta} \lVert y - X\beta \rVert_2^2 + \lambda \lVert \beta \rVert_2^2$$

where $\lambda$ is the amount of penalization (tuning parameter) and $\lVert \beta \rVert_2^2$ is the squared L2 norm of the $\beta$s, i.e., $\lVert \beta \rVert_2^2 = \sum_i \beta_i^2$. More recently, Tibshirani et al. introduced LASSO regression, an elegant and relatively widespread solution to carry out variable selection and parameter estimation simultaneously, also in high-dimensional settings . In LASSO regression, the objective function to be minimized is the following:

$$\arg\min_{\beta} \lVert y - X\beta \rVert_2^2 + \lambda \lVert \beta \rVert_1$$

where $\lambda$ is the amount of penalization (tuning parameter) and $\lVert \beta \rVert_1$ is the L1 norm of the $\beta$s, i.e., $\lVert \beta \rVert_1 = \sum_i |\beta_i|$. Some issues concerning the computation of standard errors and inference have been recently discussed . A combination of the LASSO and Ridge regression penalties leads to Elastic Net (EN) regression:

$$\arg\min_{\beta} \lVert y - X\beta \rVert_2^2 + \lambda_1 \lVert \beta \rVert_1 + \lambda_2 \lVert \beta \rVert_2^2$$

where $\lambda_1 \lVert \beta \rVert_1$ is the L1 (LASSO) penalty and $\lambda_2 \lVert \beta \rVert_2^2$ is the L2 (Ridge) penalty. Regularization parameters reduce overfitting by decreasing the variance of the estimated regression parameters: the larger the $\lambda$, the more shrunken the estimate, but the more bias is added to the estimates. Cross-validation can be used to select the value of $\lambda$ that ensures the best model is selected. Another family of regression methods is represented by regression trees. A regression tree is built by splitting the whole data sample, constituting the root node of the tree, into subsets (which constitute the successor children), based on different cut-offs on the input variables . The splitting rules are based on measures of prediction performance; in general, they are chosen to minimize the residual sum of squares:

$$RSS = \sum_{i=1}^{n} (y_i - \hat{y}_i)^2$$

The pseudo-algorithm works as follows:
1. Start with a single node containing all the observations, and calculate $\hat{y}_i$ and $RSS$.
2. If all the observations in the node have the same value for all the input variables, stop. Otherwise, search over all binary splits of all variables for the one which reduces $RSS$ as much as possible.
3. Restart from step 1 for each new node.
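Before moving on to ensemble methods, a minimal sketch of the penalized regressions above is given below using scikit-learn; the simulated data and the penalty grids are assumptions made for illustration only.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import RidgeCV, LassoCV, ElasticNetCV

# Simulated regression data: 200 observations, 50 features, 10 informative
X, y = make_regression(n_samples=200, n_features=50, n_informative=10,
                       noise=5.0, random_state=0)

# Cross-validation selects the tuning parameter lambda (called alpha here)
ridge = RidgeCV(alphas=np.logspace(-3, 3, 13)).fit(X, y)           # L2 penalty
lasso = LassoCV(cv=5, random_state=0).fit(X, y)                    # L1 penalty
enet = ElasticNetCV(l1_ratio=0.5, cv=5, random_state=0).fit(X, y)  # L1 + L2

# The LASSO sets some coefficients exactly to zero (variable selection)
print("non-zero LASSO coefficients:", np.sum(lasso.coef_ != 0))
```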
Random forests (RF) are an ensemble learning method based on a multitude of decision trees; to make a prediction for new input data, the predictions obtained from the individual trees are averaged . RuleFit is another ensemble method, which combines regression trees and LASSO regression . The structural model takes the form:

$$F(x) = a_0 + \sum_{m=1}^{M} a_m f_m(x)$$

where $M$ is the size of the ensemble and each ensemble member ("base learner") $f_m(x)$ is a different function (usually the indicator function) of the input variables $x$. Given a set of base learners $f_m(x)$, the parameters of the linear combination are obtained by

$$\{\hat{a}_m\}_0^M = \arg\min_{\{a_m\}_0^M} \sum_{i=1}^{N} L\big(y_i, F(x_i)\big) + \lambda \sum_{m=1}^{M} |a_m|$$

where $L$ indicates the loss function to be minimized. The first term represents the prediction risk, and the second penalizes large values of the base-learner coefficients. Support Vector Regression (SVR) solves an optimization problem in which a convex loss function is minimized to find the flattest zone around the function (known as the tube) that contains most of the observations . The convex optimization problem, which has a unique solution, is solved using appropriate numerical optimization algorithms. The function to be minimized is the following:

$$\frac{1}{2}\lVert \beta \rVert_2^2 + C \sum_{i=1}^{N} V_\epsilon(y_i - x_i^T \beta)$$

with

$$V_\epsilon(r) = \begin{cases} 0, & |r| < \epsilon \\ |r| - \epsilon, & \text{otherwise} \end{cases}$$

where $C$ is an additional hyperparameter controlling the trade-off between flatness and errors: the larger $C$, the more heavily deviations beyond $\epsilon$ are penalized.

2.2 Classification Methods

Classification methods are applied when the response variable is binary or, more generally, categorical. Naive Bayes (NB) is a "probabilistic classifier" based on the application of Bayes' theorem with strong (naive) independence assumptions between the features . Indeed, the NB classifier estimates the class $C$ of an observation by maximizing the posterior probability:

$$\arg\max_{C} \frac{p(x \mid C)\, p(C)}{p(x)}$$

Support Vector Machine (SVM) builds a model that assigns new examples to one category or the other, making it a non-probabilistic binary linear classifier . The underlying idea is to find the optimal separating hyperplane between two classes by maximizing the margin between the closest points of the two classes. To find the optimal separating hyperplane, one minimizes:

$$\min_{\beta} \frac{1}{2}\beta^T \beta \quad \text{subject to} \quad y_i(x_i^T \beta) \ge 1 \quad \text{for } i = 1, \dots, n$$

A quadratic programming solver is needed to solve this optimization problem. The k-nearest neighbors (KNN) method is a non-parametric ML method which can be used to solve classification problems . KNN assigns a new case to the category to which it is most similar among the available categories. Given a positive integer $k$, KNN looks at the $k$ observations closest to a test observation $x_0$ and estimates the conditional probability that it belongs to class $j$ using the formula

$$P(Y = j \mid X = x_0) = \frac{1}{k} \sum_{i \in N_0} I(y_i = j)$$

where $N_0$ is the set of the $k$ nearest observations and $I$ is the indicator function, which is 1 if a given observation is a member of class $j$ and 0 otherwise. Since the $k$ nearest points are needed, the first step of the algorithm is calculating the distance between the input data points. Different distance metrics can be used; the Euclidean distance is the most common.
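A minimal sketch comparing the three classifiers just described (NB, SVM and KNN) with scikit-learn is shown below; the synthetic dataset and the hyperparameter choices (e.g., k = 5) are illustrative assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

# Simulated binary classification data
X, y = make_classification(n_samples=300, n_features=20, random_state=0)

models = {
    "Naive Bayes": GaussianNB(),
    "SVM (linear)": SVC(kernel="linear", C=1.0),
    "KNN (k=5)": KNeighborsClassifier(n_neighbors=5, metric="euclidean"),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: mean CV accuracy = {acc:.3f}")
```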
A Neural Network (NN) is a set of perceptrons (artificial neurons) linked together in a pattern of connections. The connection between two neurons is characterized by a connection weight, updated during training, which measures the degree of influence of the first neuron on the second . NNs can also be applied in unsupervised learning. The strengths and limitations of each approach are summarized in .

2.3 Supervised Machine Learning Approaches in Pharmacogenetics

Recent studies in pharmacogenetics aiming to predict drug response have used SML approaches with satisfactory results . In particular, a study assessing the pharmacogenetics of antidepressant response compared different supervised techniques such as NN, recursive partitioning, learning vector quantization, gradient boosted machines and random forests. The data came from 671 adult patients from three European studies on major depressive disorder. The best accuracy among the tested models was achieved by the NN . Another study, on 186 patients with major depressive disorder, aimed to predict the response to antidepressants and compared the performance of RT and SVM; the SVM showed the best performance in predicting the antidepressant response. Moreover, in a second step of the analysis, the authors applied LASSO regression for feature selection, which allowed the 19 most robust SNPs to be selected. In addition, the application of SML allowed remitters and non-remitters to antidepressants to be distinguished . A field of pharmacogenetics where SML techniques find wide application is the study of the response to anti-cancer drugs; in this regard, EN, SVM and RF have shown excellent accuracy, generalizability and transferability . Studies on warfarin dosing have applied different SML techniques (NN, Ridge, RF, SVR and LASSO), showing a significant improvement in prediction accuracy compared to standard methods . Another study, on the prediction of the stable warfarin dose using seven SML models (multiple linear regression, NN, RT, SVR and RF), showed that multiple linear regression may still be the best model in the study population . A comparative study on the prediction of clinical dose values from DNA gene expression datasets using SML methods such as RTs and SVR reported that the best prediction performance in nine of 11 datasets was achieved by SVR . Recently, an algorithm named "AwareDX: Analysing Women At Risk for Experiencing Drug toxicity", based on RF, was developed for predicting sex differences in drug response, demonstrating high precision .
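As a hypothetical sketch of the kind of workflow reported in these studies—L1-penalized selection of SNPs followed by an SVM classifier of remission status—one could write the following; the simulated genotype matrix, the sample size and every parameter value are assumptions, and nothing here reproduces the cited analyses.

```python
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X_snps = rng.integers(0, 3, size=(186, 500)).astype(float)  # 0/1/2 allele counts
signal = X_snps[:, :5].sum(axis=1) - 5.0                    # signal in 5 SNPs
y_remit = (signal + rng.normal(0, 1, size=186) > 0).astype(int)  # remitter yes/no

# L1-penalized logistic regression keeps only SNPs with non-zero coefficients;
# the retained features are then fed to an SVM classifier
pipe = make_pipeline(
    SelectFromModel(LogisticRegression(penalty="l1", solver="liblinear", C=0.5)),
    SVC(kernel="rbf"),
)
acc = cross_val_score(pipe, X_snps, y_remit, cv=5).mean()
print(f"mean CV accuracy: {acc:.3f}")
```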
Regarding UML, data-driven approaches using clustering methods can be used to describe the data, with the aim of understanding whether the observations can be stratified into different subgroups. Clustering methods can be divided into (i) combinatorial algorithms, (ii) hierarchical methods and (iii) self-organizing maps .

3.1 Combinatorial Algorithms

In combinatorial algorithms, objects are partitioned into clusters by minimizing a loss function, e.g., the sum of the within-cluster variability. In general, the aim is to maximize the variability among clusters and to minimize the variability within clusters. K-means is considered the most typical representative of this group of algorithms. Given a set of observations $(x_1, x_2, \dots, x_n)$, k-means clustering aims to partition the $n$ observations into $k$ ($\le n$) sets $S = \{S_1, S_2, \dots, S_k\}$, minimizing the within-cluster variances. Formally, the objective function to be minimized is the following:

$$L = \sum_{i=1}^{k} \sum_{x_j \in S_i} \lVert x_j - \mu_i \rVert^2$$

where $\mu_i$ is the centroid of $S_i$. The k-means algorithm starts with a first set of randomly selected centroids, which are used as starting points for every cluster, and then performs iterative calculations to optimize the positions of the centroids. In k-means clustering, the centroids $\mu_i$ are the means of the clusters $S_i$. The algorithm stops when there is no change in the centroids or when a maximum number of iterations has been reached . K-means is defined for quantitative variables and the Euclidean distance metric; however, the algorithm can be generalized to any distance $D$. K-medoids clustering is a variant of k-means that is more robust to noise and outliers . K-medoids minimizes the sum of dissimilarities between the points labelled as belonging to a cluster and a point designated as the centre of that cluster (the medoid), instead of using the mean point as the centre of the cluster.
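As an illustration of the k-means procedure above, the following minimal Python sketch uses scikit-learn; the blob data and the choice of k = 3 are assumptions made purely for demonstration.

```python
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans

# Simulated data with three well-separated groups
X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
labels = km.labels_               # cluster assignment of each observation
centroids = km.cluster_centers_   # the cluster means (the mu_i above)
print("within-cluster sum of squares:", km.inertia_)  # the objective L above
```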
3.2 Hierarchical Methods

Hierarchical clustering produces, as output, a hierarchical tree, where the leaves represent the objects to be clustered and the root represents a super-cluster containing all the objects . Hierarchical trees can be built by consecutive fusions of entities (objects or already formed clusters) into bigger clusters, which constitutes an agglomerative method; alternatively, consecutive partitions of clusters into smaller and smaller clusters constitute a divisive method. Agglomerative hierarchical clustering produces a series of data partitions, $P_n, P_{n-1}, \dots, P_1$, where $P_n$ consists of $n$ singleton clusters and $P_1$ is a single group containing all $n$ observations. Basically, the pseudo-algorithm consists of the following steps:
1. Compute the distance matrix $D$.
2. Merge the two most similar observations into a first cluster.
3. Update $D$.
4. Repeat steps 2 and 3 until all observations belong to a single cluster.

One of the simplest agglomerative hierarchical clustering methods is the nearest-neighbour technique (single linkage), in which the distance between two clusters $(r, s)$ is computed as follows:

$$D(r, s) = \min_{i \in r,\; j \in s} d(i, j)$$

At each step of the hierarchical clustering, the clusters $r$ and $s$ for which $D(r, s)$ is minimal are merged; the method therefore merges the two most similar clusters. In the farthest-neighbour technique (complete linkage), the distance between two clusters $(r, s)$ is defined as follows:

$$D(r, s) = \max_{i \in r,\; j \in s} d(i, j)$$

Again, at each step the clusters $r$ and $s$ for which $D(r, s)$ is minimal are merged. In average linkage clustering, the distance between two clusters is defined as the average of the distances between all pairs of objects, where each pair is made up of one object from each cluster. Divisive clustering is more complex than agglomerative clustering; a flat clustering method is needed as a "subroutine" to split each cluster until each observation forms its own singleton cluster . Divisive clustering algorithms begin with the entire dataset as a single cluster and, at each iteration, recursively divide one of the existing clusters into two further clusters. The pseudo-algorithm consists of the following steps:
1. Start with all data in one cluster.
2. Split the cluster using a flat clustering method (k-means, k-medoids).
3. Choose the best cluster among all existing clusters and split it with the flat clustering algorithm.
4. Repeat steps 2 and 3 until each observation is in its own singleton cluster.
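The agglomerative linkage criteria described above can be sketched with SciPy as follows; the two-group data and the cut into two clusters are illustrative assumptions.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# Two simulated groups of 20 two-dimensional observations each
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(5, 1, (20, 2))])

for method in ("single", "complete", "average"):
    Z = linkage(X, method=method, metric="euclidean")  # agglomerative merge tree
    labels = fcluster(Z, t=2, criterion="maxclust")    # cut the tree into 2 clusters
    print(method, np.bincount(labels)[1:])             # cluster sizes
```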
3.3 Self-Organizing Maps

The Self-Organizing Map (SOM) is the most popular artificial neural network algorithm in the UML category . A SOM can be viewed as a constrained version of k-means clustering, in which the original high-dimensional objects are constrained to map onto a two-dimensional coordinate system. Consider $n$ observations, $M$ variables (the dimensional space) and $K$ neurons. Denoting by $w_i$, $i = 1, \dots, K$, the positions of the neurons in the $M$-dimensional space, the pseudo-algorithm consists of the following steps:
1. Choose random values for the initial weights $w_i$.
2. Randomly choose an observation $x$ and find the winner neuron $j$, whose weight $w_j$ is the closest to $x$.
3. Update the position of $w_j$, moving it towards $x$.
4. Update the positions of the neuron weights $w_h$ with $h \in NN_j(t)$ (the winner's neighbourhood).
5. Assign each observation to a cluster based on the distance between the observations and the neurons.

In more detail, the winner neuron is detected according to:

$$j = \arg\min_{i = 1, \dots, K} \lVert x - w_i \rVert$$

The winner-weight updating rule is the following:

$$w_j(t+1) = w_j(t) + \eta(t)\,\big(x - w_j(t)\big)$$

where $\eta(t)$ is the learning rate, which decreases as the number of iterations increases, and the updating rule for the neighbourhood $NN_j(t)$ is the following:

$$w_h(t+1) = w_h(t) + f\big(NN_j(t), t\big)\,\big(x - w_h(t)\big)$$

where the neighbourhood function $f(NN_j(t), t)$ gives more weight to neurons closer to the winner $j$ than to those further away. The strengths and limitations of each approach are reported in .

3.4 Unsupervised Machine Learning Approaches in Pharmacogenetics

Since the main goal in pharmacogenetics is to predict drug response, only a few studies have used UML techniques . These techniques have mainly been used in data pre-processing to identify groups. Tao et al., in order to balance a dataset of patients treated with warfarin and improve predictive accuracy, proposed solving the data-imbalance problem with a clustering-based oversampling technique. The algorithm detects the minority group based on the association between the clinical features/genotypes and the warfarin dosage; new synthetic samples, generated by selecting a minority sample and finding the k nearest neighbours of that sample, are added to the dataset. Two SML techniques (RT and RF) were then compared for predicting the warfarin dose, and both models achieved the same or higher performance in many cases . A study aiming to combine the effects of genetic polymorphisms and clinical parameters on the treatment outcome in treatment-resistant depression used a two-step ML approach: first, patients were analysed using an RF algorithm, and in a second step the data were grouped through cluster analysis, which identified five clusters of patients significantly associated with treatment response .
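As a purely hypothetical sketch in the spirit of the two-step approach just described (random-forest analysis followed by clustering), one could combine feature importances with k-means; all data, the number of retained features and the choice of five clusters are assumptions and do not reproduce the cited study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
X = rng.normal(size=(150, 40))      # hypothetical genetic + clinical features
y = rng.integers(0, 2, size=150)    # hypothetical treatment response

# Step 1: random forest, then keep the 10 most important features
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
top = np.argsort(rf.feature_importances_)[-10:]

# Step 2: cluster the patients on the selected features
clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X[:, top])
print("cluster sizes:", np.bincount(clusters))
```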
ML techniques are sophisticated methods that achieve satisfactory results in terms of prediction and classification. In pharmacogenetics, ML has shown satisfactory performance in predicting drug response in several fields, such as cancer, depression and anticoagulant therapy. RF proved to be the most frequently applied SML technique. Indeed, RF builds many trees on different subsets of the data and combines the output of all the trees, reducing variance and the risk of overfitting. Moreover, RF works well with both categorical and continuous variables and is usually robust to outliers. Unsupervised learning still appears to be used infrequently. The potential benefits of these methods have yet to be explored; indeed, using UML as a preliminary step in the analysis of drug response could provide subgroups of response that are less arbitrary and more balanced than the standard definitions of response. Although ML methods have shown superior performance with respect to classical ones, some limitations should be considered. Firstly, ML methods are particularly effective for analysing large, complex datasets: the amount of data should be large enough to provide sufficient information for solid learning, and a small sample size may affect the stability and reliability of ML models. Moreover, owing to algorithmic complexity, other potential limitations are overfitting, the lack of standardized procedures and the difficulty of interpreting the results. The main strength of ML techniques is that they provide very accurate results, with a notable impact in line with the principles of precision medicine. To overcome the possible limitations of ML, future work should focus on the creation of open-source systems that allow researchers to collaborate in sharing their data.
Poverty Dynamics and Caries Status in Young Adolescents | 9b251de6-13a5-4669-823e-b5e52a5d6e30 | 11754150 | Dentistry[mh] | Introduction Poverty is a major sociodemographic factor that influences people's health . Likewise, poverty adversely affects oral health . Deprivation during childhood impacts children's nutrition, parental knowledge and attitudes, increasing the risk of a higher prevalence and severity of oral diseases. In the long term, this can lead to pain, infection and a negative effect on oral health‐related quality of life . However, most of the studies conducted only use cross‐sectional measures to assess the influence of income on oral health. Research shows that the development of diseases is based on a continuum of exposures, therefore studying the dynamics of poverty throughout life could provide insights into the complexity of oral diseases and conditions . The lifecourse theory and its influence on health conditions have been extensively explained . In summary, exposure to adverse factors is continual and cumulative throughout life (cumulative risk model). For example, living in low socioeconomic circumstances for a longer time period may impose an increased risk of contracting many diseases . The accumulation may occur gradually, or there may be certain critical or sensitive periods (critical period model) during which people are more vulnerable to developing a disease. Furthermore, changes in social class (social mobility model) give rise to differences in health and disease profiles . The three causal models suggest that the health of individuals depends on the interaction of various protective and risk factors related to behavioural, biological, psychological and environmental influences throughout life . Clinically, tooth decay is caused by deficient oral hygiene and high intake of free sugars . However, it is important to acknowledge that oral health‐related behaviours are socially patterned, and these behaviours are not the only reason explaining different caries levels among the population . Both oral hygiene and nutrition are also subject to habit formation, and so far, it remains unknown whether changes in socioeconomic status (SES) may influence oral health‐related behaviours and subsequently oral health status. Only a few existing studies use different lifecourse theories on dental caries . Their findings show associations between poverty and unfavourable socioeconomic circumstances during childhood and dental caries later in life . These studies also suggest a dose–response relationship between the number of periods in social disadvantage and dental caries . However, there is no conclusive evidence of whether social mobility during childhood has an impact or not on dental caries in young people with early permanent dentition. The aim of this study was to investigate whether the timing and accumulation of periods in poverty are associated with dental caries in young adolescents. Furthermore, trajectories of poverty along the 13 years were determined and studied in relation with dental caries status at the age of 13 years.
Methods This study is embedded in the Generation R Study, an ongoing population‐based prospective cohort study from fetal life onwards, conducted in Rotterdam, the Netherlands. The Medical Ethics Committee of Erasmus Medical Centre approved this research (MEC 2015‐749‐NL55105.078.15). Participants and their parent(s) provided written informed consent before interviews and examinations were performed. The Generation R Study is multi‐disciplinary and focusses on diverse health outcomes from early life onward. Pregnant women registered as inhabitants of the municipality of Rotterdam between April 2002 and January 2006 were eligible to participate in the study. In total, 9778 mothers were enrolled at the start of the study and gave birth to 9749 live‐born children. For the current study, data collection took place during pregnancy (early, mid and late), childhood and early adolescence. During pregnancy and when the children were 2, 3, 6, 9 and 13 years old, information regarding household income was retrieved. At the age of 13 years, 6842 children participated in the study, and dental caries in the permanent dentition was assessed in 4086 children. Children who provided information about net household income at at least four of the six time points were included in the analysis ( n = 2913). In addition, siblings were excluded; therefore, the final study population comprised 2653 children (Figure ). 2.1 Dental Caries Intraoral photographs were taken of all participants who visited the research centre at the follow‐up phase of children aged 13 years. Children were instructed to brush their teeth for 2 min. A quantitative light fluorescence camera (Qraycam Pro; Inspektor Research Systems BV) was used to capture children's dentition in at least five white-light and blue-light photographs. The intraoral photographs were scored for dental caries by two trained researchers. Ten per cent of the participants were selected at random and scored twice to calculate the intra‐rater reliability (weighted kappa = 0.94) and inter‐observer reliability (weighted kappa = 0.84), both of which exhibited high agreement. The reliability of the quantitative light fluorescence camera for the assessment of the decayed, missing and filled teeth (DMFT) index was evaluated, and it showed good sensitivity and high specificity compared with clinical visual–tactile inspection. Dental caries in the permanent dentition was assessed at the age of 13 using the DMFT index. Decayed teeth were scored for caries with visible enamel breakdown, observed as white-spot lesions and brown carious discoloration. Missing teeth were scored when elements were missing solely because of caries, which was verified on dental panoramic radiographs taken at the age of nine. Fillings were scored when teeth were restored because of caries. 2.2 Poverty Questionnaires were used to collect data regarding household income. A multiple‐choice question asked parents to indicate the net household income category at six time points (during pregnancy and at the child ages of 2, 3, 6, 9 and 13 years). Net household income included monthly income from work, benefits and/or income from assets that respondents received in‐hand following the deduction of tax and other contributions. Parents were also asked about the number of adults and children in the household (i.e., the number of units) living on this income.
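The income-category and household-size answers above are combined into an equivalised disposable income, as described in the next paragraph. As a worked illustration of that step, the sketch below applies the OECD-modified equivalence scale (weight 1.0 for the first adult, 0.5 for each additional adult and 0.3 for each child under 14) and the 60%-of-median poverty threshold; the household composition and the national median used here are invented.

```python
# Equivalised disposable income and poverty flag (hypothetical numbers).
def oecd_modified_factor(n_adults: int, n_children: int) -> float:
    """OECD-modified equivalence scale: 1.0 + 0.5 per extra adult + 0.3 per child."""
    return 1.0 + 0.5 * (n_adults - 1) + 0.3 * n_children

net_income = 3200.0  # euros/month, midpoint of the 2800-3600 category (as in the text)
factor = oecd_modified_factor(n_adults=2, n_children=2)  # 1.0 + 0.5 + 0.6 = 2.1
equivalised = net_income / factor                         # ~1523.8 euros/month

national_median = 2000.0      # invented national median equivalised income
poverty_line = 0.6 * national_median
is_poor = equivalised < poverty_line
print(f"Equivalised income: {equivalised:.1f} euros; poor: {is_poor}")
```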
The mean income for each income category was calculated (e.g., 3200 euros for the category receiving 2800–3600 euros per month), and those figures were then used to calculate the equivalised disposable income, based on the modified equivalence scale of the Organisation for Economic Co‐operation and Development (OECD). Children living in a household with an income below the European poverty threshold—less than 60% of the national median (equivalised) disposable income—were classified as 'poor'. As data collection at each phase took place over several years, the median year of the years included in each phase was used. Up to two missing poverty measurements were coded as not poor; when income data were missing at more than two of the six time points, children were excluded from the analyses. The following variables were defined on the basis of the poverty status. Poverty at birth and poverty at 2 years old were defined as yes/no variables. Cumulative poverty was defined by the number of episodes of poverty between pregnancy and the child's age of 13 years: no poverty (zero episodes), intermittent poverty (one–three episodes) or chronic poverty (four–six episodes). The poverty trajectories over time were identified using latent class growth analysis (LCGA). This method assigns participants to the trajectory group to which they had the highest probability of belonging, based on similar patterns in the observed repeated measurements. The lowest Bayesian information criterion (BIC) value was used to select the number of trajectories, and LCGA was carried out with two to six classes. A categorical variable encoding the trajectories was then created as a predictor of dental caries. 2.3 Covariates A number of characteristics were considered as confounders in the analyses, such as age, gender and maternal age at enrolment. In addition, maternal educational level was retrieved using questionnaires at the age of 6 years and recategorised as low, middle and high. Children's ethnic background was defined according to the Dutch classification of ethnic background as 'Dutch' or 'non‐Dutch', the latter if one of the parents was born in a country other than the Netherlands. Financial stress was retrieved from questionnaires at the age of 13 years and indicated whether the family had experienced worries or tensions in the past 2 years because of financial difficulties. Oral health factors were assessed using questionnaires at the age of 13 years. Sugar intake included two questions on the weekly consumption frequency of sweets and chocolate and of soft drinks. For the analyses, sugar intake was categorised as 'low' (≤ 2 sugar‐containing items a day) or 'high' (≥ 3 sugar‐containing items a day). Toothbrushing frequency was categorised as '< 2 per day' or '≥ 2 per day'. Dental visits in the last 12 months were assessed with yes/no. 2.4 Statistical Analysis Descriptive statistics of the study population were presented. Because the DMFT value is zero‐inflated and over‐dispersed, negative binomial hurdle regression (NBHR) models were used to study the association between poverty and dental caries at the age of 13 years. A hurdle model output consists of two parts: a zero‐hurdle part, equivalent to binomial logistic regression, which estimates the OR of having caries experience, and a count-hurdle part, which estimates the contribution of poverty to the amount of caries experience using the rate ratio (RR) of the mean caries counts.
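To make the two-part structure of the hurdle model concrete, here is a rough sketch in Python using statsmodels on simulated data. Two simplifications are flagged loudly: the count part below fits an ordinary negative binomial to the positive counts rather than the zero-truncated negative binomial of a strict hurdle model, and the single 'poverty' covariate and simulated DMFT counts are illustrative only.

```python
# Two-part approximation of a negative binomial hurdle model (simulated data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1000
poverty = rng.binomial(1, 0.15, size=n)                   # hypothetical exposure
lam = np.exp(-0.5 + 0.4 * poverty)                        # mean caries count
dmft = rng.poisson(lam) * rng.binomial(1, 0.4 + 0.1 * poverty, size=n)  # zero-inflated counts

X = sm.add_constant(poverty)

# Part 1 (zero hurdle): odds of having any caries experience (DMFT > 0).
logit_fit = sm.Logit((dmft > 0).astype(int), X).fit(disp=0)
print("OR for poverty:", np.exp(logit_fit.params[1]))

# Part 2 (count hurdle): mean DMFT among those with caries.
# A strict hurdle uses a zero-truncated NB; a plain NB is used here for brevity.
pos = dmft > 0
nb_fit = sm.NegativeBinomial(dmft[pos], X[pos]).fit(disp=0)
print("RR for poverty:", np.exp(nb_fit.params[1]))
```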
Three models were built: the first included the child's gender and age; the second additionally adjusted for sociodemographic indicators (maternal education level, maternal age at enrolment, children's ethnic background and financial stress); and the third additionally accounted for oral health factors. Collinearity among the determinants was tested and found to be absent. LCGA was performed in Mplus version 8.6. The statistical analyses were carried out using R version 4.3.2 for Windows (R core team, Vienna, Austria). Multiple imputation across 10 data sets was performed with the 'mice' package to account for information bias related to missing data; the exposure and outcome were not imputed. For all analyses, the significance level was set at 0.05.
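LCGA itself was fitted in Mplus, for which there is no open-source drop-in; purely to illustrate the BIC-based choice among two to six classes, the sketch below runs an analogous selection loop with a Gaussian mixture over the six repeated measurements. This is our stand-in for the idea, not the authors' model.

```python
# BIC-based selection of the number of latent classes (rough analogue of LCGA;
# the study used Mplus, and this GaussianMixture stand-in is ours, not the authors').
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# Simulated poverty-probability measurements at six time points, two latent groups.
g1 = rng.normal(0.05, 0.03, size=(300, 6))
g2 = rng.normal(0.40, 0.10, size=(60, 6))
X = np.vstack([g1, g2]).clip(0, 1)

bics = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
        for k in range(2, 7)}
best_k = min(bics, key=bics.get)   # lowest BIC wins, as in the study
print(f"BIC by class count: {bics}; selected: {best_k}")
```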
Results This study analysed 2653 children, most of whom had a Dutch ethnic background (78.2%) and highly educated mothers. The distribution of poverty showed that 9.4% of the study population was born into poverty. Up to the age of 13 years, 3.9% of the children had experienced four or more episodes of poverty. Regarding oral health factors, 33.4% of the adolescents had dental caries, and most of the participants had adequate oral health care (Table ). Table shows that, after adjustment for potential confounders, poverty at birth was significantly associated with dental caries at the age of 13 years (OR 1.41, 95% CI 1.01–1.99). Poverty at birth also significantly increased the mean number of teeth affected by dental caries (RR 1.34, 95% CI 1.02–1.76) compared with participants who were not born into poverty. By contrast, poverty at 2 years old was not significantly associated with dental caries. Intermittent poverty was associated with any dental caries after correction for sociodemographic factors (OR 1.36, 95% CI 1.01–1.83) and with the mean number of decayed teeth (RR 1.34, 95% CI 1.05–1.71). In contrast, chronic poverty was not associated with dental caries at 13 compared with children who were never poor. A four‐trajectory model showed the best fit for the data (Table ). Based on the probability of poverty, the following four trajectories were identified: 'stable absent' 75.4% (participants who remained out of poverty), 'stable low' 14.8% (those who remained at a low probability), 'upward mobility' 4% (participants who started with a high probability of poverty that declined steadily to a low probability) and 'downward mobility' 5.9% (those whose likelihood of poverty increased steadily over time). No trajectory characterised by a constant high probability of poverty (always poor) was identified (Figure ). Table shows a significant association between poverty trajectories and dental caries at the age of 13 after full adjustment. Compared with the 'stable absent' trajectory, all other trajectories had either higher odds of dental caries ('upward mobility' OR 1.62, 95% CI 1.03–2.56) or a higher mean number of decayed teeth ('stable low' RR 1.66, 95% CI 1.24–2.24). Furthermore, the 'downward mobility' trajectory was significantly associated with both dental caries and the mean number of decayed teeth (OR 1.55, 95% CI 1.05–2.29; RR 1.58, 95% CI 1.18–2.12).
Discussion Consistent with lifecourse theories, this study found that dental caries in young adolescents is associated with poverty at the time of birth and with intermittent poverty from birth up to the age of 13 years. Furthermore, downward mobility into poverty was associated with higher odds of dental caries and an increased mean number of decayed teeth. This prospective cohort study analysed poverty status through several repeated measurements over 13 years and dental caries in young adolescents. An extensive body of literature underlines the importance of the first 1000 days of life. Therefore, it was expected that poverty at birth could be a risk factor for caries development later in life. A birth cohort study assessed the relationship between poverty at birth and caries status, with findings consistent with the present results. Peres et al. reported a significant positive association between poverty at birth and the number of unsound teeth in adults. Other longitudinal studies evaluated the importance of other 'critical periods' for the prevalence of dental caries and diverse oral health outcomes, but findings were mixed. For instance, four studies found that SES measured during adulthood may have a stronger relationship with adult oral health status than SES measured during childhood or adolescence. Findings from another longitudinal study suggest that the strong association between SES in early life and later oral health outcomes is largely indirect: a socioeconomic gap created early in life acts as a chain of risks throughout the lifecourse. Thus, although this study could not analyse the effect of early life on dental caries in adulthood, the results indicate that the importance of early influences on later oral health should not be neglected. Although the best predictor of dental caries is past caries experience, research suggests that children who experienced early disadvantage, but later saw an improvement in their SES, may have a lower risk of caries in their permanent teeth. Likewise, other studies using different oral health outcomes found the same positive effect of upward social mobility, which suggests that proximal time points in the trajectories may have a more important effect on young or adult oral health. Findings from this research are in agreement, indicating that an upward trajectory did not completely attenuate the negative effects of deprivation during early childhood or adolescence on adult dental health. In this study, participants with a favourable change in their household income were more likely to have dental caries than those who remained in the 'stable absent' trajectory. Unhealthy oral health behaviours, such as a lack of toothbrushing and dental check‐ups and a high intake of free sugars, are developed and established during the early years. These may be carried through into adolescence despite an improvement in a family's SES. This study found that downward mobility into poverty was associated with dental caries and increased the mean number of decayed teeth at 13 years of age, which is in line with previous studies. Downward mobility may reduce the likelihood of a child attending a dental service and therefore of receiving treatment. Although children's dental treatment in the Netherlands is reimbursed by health insurance, deprived families may not be aware of children's oral health needs, or they may prioritise other family issues.
In addition, it has been reported in the literature that psychological factors could influence oral health. In contrast with previous research, the findings of the present study showed no graded relationship between the number of episodes of poverty and dental caries. Poverty was measured at six time points over 13 years, whereas most studies have included only two or three time points. However, intermittent poverty was associated with decayed teeth. This can be explained by the fact that, across the six time points assessed, some periods in the lifecourse are potentially more crucial than others. Furthermore, the analysis showed that chronic poverty was not significantly associated with dental caries. This may be due to a lack of statistical power, as the number of children in chronic poverty was low. In addition, the existing literature shows that, in younger generations with good oral health, there was no evidence of a gradient when inequalities were estimated. It is also possible that the most vulnerable families received additional support from institutions, which could diminish the socioeconomic differences found in this study. Regarding the strengths of this study, household income—an accurate indicator—was used as the measure of poverty. Poverty was investigated at six time points up to the age of 13 years; therefore, it likely represents the whole set of socioeconomic conditions that the adolescents experienced across their lifespan. Owing to the extensive data collection within this cohort, the authors were able to adjust the analysis for several potential confounders considered in the previous literature. However, residual confounding remains an issue. In terms of limitations, it is acknowledged that dental caries was assessed using intraoral photographs, on which certain stages of caries development may be more difficult to diagnose than with clinical assessment, possibly resulting in an underestimation of the condition. Finally, these findings must be considered with caution and cannot be generalised to all populations. Differences in public health policies related to access to dental services, legislation on products high in free sugars and the availability of water fluoridation may influence the strength of the association between tooth decay and poverty. For future research, it is recommended to examine different critical periods in the lifecourse of the study population and relate these to the socioeconomic trajectories. Efforts should be made to analyse and report findings following the recommendations of the Oral Health‐Related Birth Cohort Studies Consortium.
Conclusions This study found that poverty at a critical period early in life (birth) and intermittent poverty from birth until the age of 13 years were associated with dental caries at 13 years. Moreover, downward mobility was also associated with decayed teeth at the age of 13 years. Poverty dynamics, as framed by lifecourse models, influence dental caries across childhood and adolescence, and it is important to monitor vulnerable populations and to develop strategies targeted at deprived children from their early years onward.
The authors declare no conflicts of interest.
|
A size-shrinkable matrix metallopeptidase-2-sensitive delivery nanosystem improves the penetration of human programmed death-ligand 1 siRNA into lung-tumor spheroids | 2506ed4b-2ae4-4754-85ee-eee41a3351a4 | 8183518 | Pharmacology[mh] | Human programmed death-ligand 1 (PD-L1; B7H1) overexpression in tumor cells reduces recognition by T cells and promotes both tumorigenesis and invasion (Teo et al., ; Zou et al., ; Schalper et al., ). PD-L1 plays an immunosuppressive role by binding to PD-1 on T cells, and blockage of this interaction can effectively reverse T cell inhibition and avoid immunosuppression in the tumor microenvironment (Benson et al., ; Yu et al., ). The United States Food and Drug Administration has approved several PD-L1 antibody drugs, including atezolizumab, avelumab, and durvalumab (Chen et al., ), for clinical use. For patients with lung cancer, immunotherapy involving blockage of the PD-1–PD-L1 interaction has been demonstrated as first-line therapy (Suresh et al., ). Recently, progress in nanotechnology has increased interest in potential applications of small-interfering (si)RNA technology. In 2004, phase I clinical trials of an siRNA-based treatment for a type of eye disease were conducted (Whelan, ; Castanotto and Rossi, ), and the first targeted nanoparticle-delivery system for solid tumor patients entered clinical trials, marking the beginning of the systematic application of siRNAs in solid tumors (Davis et al., ). Additionally, applications of PD-L1–siRNA have shown positive progress in preclinical studies, where PD-L1–siRNA lipid-nanoparticle therapy increased the proliferation of natural killer cells and antigen-specific CD8+ T cells to enhance their killing and memory functions (Gato-Cañas et al., ). Moreover, previous studies showed that PD-L1 is involved in intracellular anti-apoptotic signaling and affects the proliferation, apoptosis, and migration of tumor cells (Clark et al., ) and can transmit anti-apoptotic signals to tumor cells, thereby helping them avoid interferon-induced cell death (Li et al., ; Dong et al., ). Therefore, reexamining the downregulation of PD-L1 expression is crucial. Polyethyleneimine (PEI) is widely used as a hydrophilic, positively charged polymer material for gene therapy. The main chain of the polymer can interact with the anionic phosphates of the siRNA through electrostatic interactions to form nanoscale complexes with sizes ranging from 100 nm to 1000 nm (Günther et al., ). Moreover, complexation protects the siRNA from enzymatic degradation and promotes cellular uptake through endocytosis (Varkouhi et al., ). The disadvantage of PEI is its cytotoxicity and non-biodegradability, despite its high transfection efficiency (Liu et al., ). However, functionalization with polyethylene glycol (PEG) and hyaluronic acid (HA) can reduce PEI toxicity. HA is a nontoxic, non-immunogenic, negatively charged natural compound produced by the human body that is also rich in carboxyl groups. The combination of polyethyleneimine and HA promotes electrostatic neutralization of the nanoparticles. Additionally, HA contributes to the formation of protective hydrophilic surfaces, indicating that PEG and HA coupling can facilitate the passive targeting efficacy of nanomedicines through the enhanced permeability and retention effect, although nanoparticle penetration can be hindered by a relatively large particle size (Cabral et al., ).
Strategies based on responsiveness to the tumor microenvironment are considered 'intelligent' and have achieved favorable results in drug delivery (Zhang et al., ). The overexpression of matrix metalloproteinases (MMPs) is among the most prominent anomalous features of tumors and can be exploited for 'smart' drug delivery and tumor targeting. Furthermore, MMP-responsive systems can accurately regulate the release of drugs at different levels. Among MMP substrates, synthetic variants mainly comprise short linear peptides, which are superior to natural macromolecular protein substrates and can be directly coupled to nanoparticles (Turk et al., ; Tu and Zhu, ). Members of the MMP family, including MMP-2 and MMP-9, are overexpressed in many cancer types (Egeblad and Werb, ), promote the destruction of the extracellular matrix, and play a critical role in tumor invasion and metastasis. Therefore, MMP-2-sensitive tumor-imaging probes and delivery systems have previously been developed for cancer-specific therapeutics (Bremer et al., ; Ruan et al., ; Han et al., ). Here, we synthesized a tumor-microenvironment-sensitive delivery polymer by conjugating HA to cationic PEI through an MMP-2-sensitive peptide (P; GPLGLAGC) linker (Han et al., ), yielding HA-P-PEI, to deliver PD-L1–siRNA into H1975 cells. Additionally, we synthesized a linker-less HA-PEI variant for comparison; both nanocarriers formed spherical particles of the same size with a uniform distribution following complexation with PD-L1–siRNA.
Materials Sodium HA (20 kDa) was obtained from Lifecore Biomedical Inc. (Chaska, MN, USA), and N-(2-aminoethyl)maleimide hydrochloride (AEM) and branched PEI (25 kDa) were obtained from Sigma-Aldrich (St. Louis, MO, USA). The oligopeptide GPLGLAGC (PLG) was synthesized by Shanghai Sangon Biotech Co., Ltd. (Shanghai, China), and negative control (NC) siRNA, Cy3-siRNA (Cy3-conjugated NC siRNA at the 5′-end), fluorescein amidite (FAM)-siRNA (FAM-conjugated NC siRNA at the 5′-end), and PD-L1–siRNA (sense: UUCUCCGAACGUGUCACGUTT; antisense: ACGUGACACGUUCGGAGAATT) (Liu, Cao, et al., ) were synthesized by Shanghai GenePharma Co., Ltd. (Shanghai, China). 1-Ethyl-3-(3-dimethylaminopropyl) carbodiimide (EDC) and 1-hydroxybenzotriazole (HOBt) were obtained from Sichuan Shuyan Biotechnology Co., Ltd. (Sichuan, China), and trypsin was purchased from Hyclone (Provo, UT, USA). MMP-2 was purchased from Sigma-Aldrich, and phycoerythrin (PE)-conjugated anti-CD274 (PD-L1; B7-H1) and allophycocyanin (APC)-conjugated anti-CD44 were purchased from eBioscience (San Diego, CA, USA). HA-PEI and HA-P-PEI synthesis For HA-PEI synthesis (Jiang et al., ), 20 mg HA and 402 mg PEI were separately dissolved in pure water (10 mL), followed by mixing of the solutions at pH 6.5. EDC (40 mg) and HOBt (28 mg) were dissolved in a pure water/DMSO (500 µL/500 µL) solution, which was slowly added to the prepared HA/PEI solution. The reaction mixture was stirred at room temperature for 24 h with the pH adjusted to 7.0. HA-PEI conjugates were then purified by dialysis (30 kDa) against 100 mM NaCl for 2 days, against 25% ethanol for 1 day, and then against distilled water for 1 day, followed by freeze-drying. For HA-P-PEI synthesis, HA-AEM was synthesized using the same method used for HA-PEI, and the PEI–oligopeptide was synthesized as follows: PEI and GPLGLAGC were dissolved in distilled water, followed by the addition of EDC and N-hydroxysuccinimide to the peptide solution and mixing with the PEI solution (pH 6.0) with gentle stirring at 25 °C for 4 h. The product solution was dialyzed (35 kDa) against distilled water, lyophilized, and stored at −20 °C. HA-AEM (20 mg) and PEI-PLG (400 mg) were then dissolved in 0.2 M phosphate-buffered saline (PBS; pH 7.4), stirred for 4 h at 25 °C, dialyzed for 48 h, and then lyophilized. Fourier-transform infrared spectroscopy and 1H nuclear magnetic resonance (NMR) experiments were conducted to confirm the composition of the materials. Micelle preparations and characterization siRNA micellar complexes were prepared by gently blending 10 μL of the siRNA solution (20 μM in diethylpyrocarbonate-treated water) with 90 μL of the polymer solution (0.1 μg/μL in diethylpyrocarbonate-treated water), followed by incubation at 25 °C for 30 min. Samples of the polymer/siRNA complexes were loaded and electrophoresed on 1.0% agarose gels containing ethidium bromide (1 μg/mL) at 120 V for 15 min in Tris-borate-EDTA buffer. The particle size and surface zeta potential of the siRNA-condensed micellar complexes were measured using dynamic light scattering (DLS; Nano-ZS90; Malvern Instruments, Malvern, UK) at 25 °C after dilution of the micelles with distilled water. Transmission electron microscopy (TEM; JEM-2100 Plus; JEOL, Tokyo, Japan) was used to observe the size and morphology of the nanoparticles, with the samples negatively stained with sodium phosphotungstate.
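The mixing protocol above implies a particular amine-to-phosphate (N/P) ratio, defined formally in the serum-stability subsection below. As a back-of-the-envelope check, the sketch assumes ~43 g/mol per protonatable PEI nitrogen (the -CH2CH2NH- repeat unit) and one phosphate per nucleotide of a 21-bp duplex; these standard approximations are ours and are not stated in the article.

```python
# Approximate N/P ratio of the polymer/siRNA complexes (standard assumptions).
sirna_mol = 10e-6 * 20e-6                 # 10 uL of 20 uM duplex -> 2e-10 mol
phosphate_mol = sirna_mol * 42            # ~1 phosphate per nucleotide, 21-bp duplex

polymer_mass_g = 9e-6                     # 90 uL * 0.1 ug/uL = 9 ug
pei_fraction = 1.0                        # for HA-PEI use ~0.95 (HA:PEI weight ratio 1:20)
amine_mol = polymer_mass_g * pei_fraction / 43.0   # 43 g/mol per -CH2CH2NH- unit

print(f"N/P ratio: {amine_mol / phosphate_mol:.0f}")   # ~25, close to the 24:1 used
```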
To evaluate the MMP-2 sensitivity of the HA-P-PEI/siRNA nanoparticles, 0.1 mL of the nanoparticles (0.1 μg/mL) and 0.1 mL of MMP-2 solution in HEPES buffer (0.6 μg/mL, pH 7.4) were incubated at 37 °C. DLS was performed at 0 h, 2 h, 6 h, 16 h, 24 h, and 48 h after incubation, and TEM was performed after 6 h of incubation. Serum stability of the siRNA-loaded nanoparticles The serum stability of the complexes was determined by incubation with serum solution. Briefly, the PEI/siRNA, HA-PEI/siRNA, and HA-P-PEI/siRNA complexes were prepared at N/P = 24:1 (N/P ratio: the molar ratio of the amine groups of the cationic polymer to the phosphate groups of the RNA) and then incubated with fetal bovine serum (FBS; 1:1 v/v) at 37 °C. A total of 1 μL of heparin solution (12 kDa, 12500 IU; Tianjin Biochem Pharmaceutical Co., Ltd., Tianjin, China) was added to de-complex the siRNA from the polymer after 0 h, 1 h, 3 h, 6 h, 8 h, and 24 h of incubation, at which time the samples were visualized by gel electrophoresis, as described. Cell culture and detection of CD44, PD-L1, and MMP-2 levels in NCI-H1975 cells The H1975 cell line was obtained from the Cell Bank of the Chinese Academy of Sciences ( https://www.cellbank.org.cn/xibaoximulu.php ) and grown in Roswell Park Memorial Institute (RPMI) 1640 medium (Hyclone, Logan, UT, USA) with 10% FBS at 37 °C in a 5% CO₂ atmosphere. The cells were digested with trypsin (0.25%), collected, washed with cold PBS, and stained with the CD44-APC and PD-L1-PE monoclonal antibodies for 20 min at room temperature. PBS (500 μL) was then added, and the cells were analyzed using a flow cytometer (Invitrogen, Carlsbad, CA, USA). Western blotting was used to determine MMP-2 levels in H1975 cells. Cell viability assay To test polymer cytotoxicity, a 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay was conducted. Briefly, H1975 cells under favorable growth conditions were seeded into 96-well plates (4000 cells/well). After 24 h of adherent growth, the cells were incubated with different concentrations of PEI, HA-PEI, and HA-P-PEI solutions for 48 h, after which 20 μL of MTT (5 mg/mL) was added to each well and incubated for 3 h. The MTT formazan precipitate was dissolved in 100 μL DMSO, and the absorbance was measured at 570 nm using a microtiter-plate reader (ReadMax 1200; Flash, Shanghai, China). Cellular uptake of complexes in vitro Flow cytometry (Invitrogen) and fluorescence microscopy (Eclipse Ts2R; Nikon, Tokyo, Japan) were used to evaluate cellular uptake of the polymer/siRNA complexes. For flow cytometry, FAM-labeled siRNA was loaded into the micelles, and 12 × 10⁴ H1975 cells/well were inoculated into 12-well plates and incubated in RPMI 1640 medium containing 10% FBS until 70–80% confluence. The medium was then replaced with fresh medium, and free FAM-siRNA, PEI/FAM-siRNA, HA-PEI/FAM-siRNA, and HA-P-PEI/FAM-siRNA (N/P = 24:1; 100 nM) complexes were added and cultured at 37 °C for 4 h. Untreated cells were used as negative controls. The cells were then collected after trypsinization and washed three times with cold PBS, followed by measurement of fluorescence intensity. The cells were resuspended in 500 μL PBS and analyzed using flow cytometry (Invitrogen). Cellular uptake of the complexes was confirmed by microscopy using Cy3-labeled siRNA and the same transfection procedure.
After incubation with free Cy3-siRNA, PEI/Cy3-siRNA, HA-PEI/Cy3-siRNA, and HA-P-PEI/Cy3-siRNA for 4 h, the cells were washed three times with cold PBS, stained with Hoechst 33342 (Abcam, Cambridge, UK) for 20 min, and then washed with PBS. Photographs were obtained using a fluorescence microscope (Eclipse Ts2R; Nikon). Tumor-spheroid penetration Tumor spheroids not only simulate the in vivo environment but also constitute an intuitive and controllable cell-culture model. H1975 spheroids were produced using the hanging-drop method. Briefly, 1 × 10⁵ cells from a single-cell suspension were dispersed in 2 mL of RPMI 1640 medium and 1 mL of 1.2% methylcellulose solution, and the suspension was dropped onto the cover of a dish. After incubation for 72 h, the spheroids grew to 200 μm and were transferred to flat-bottomed 48-well plates pretreated with 2% agarose. To evaluate the penetration efficacy of the nanoparticles, free Cy3-siRNA, PEI/Cy3-siRNA, HA-PEI/Cy3-siRNA, HA-P-PEI/Cy3-siRNA, and MMP-2-pretreated HA-P-PEI/Cy3-siRNA were added, and after a 4-h culture, the solution containing the tumor spheroids was collected and centrifuged (300 rpm). The precipitates were washed three times with cold PBS and transferred into confocal dishes (Wuxi NEST Biotechnology Co., Ltd., Wuxi, China). Photographs at different penetration depths were obtained using a confocal laser scanning microscope (CLSM880; Carl Zeiss, Oberkochen, Germany), and fluorescence intensity was analyzed using ImageJ software (Schneider et al., ). Gene silencing by the PD-L1–siRNA complexes in vitro The gene-silencing efficacy of PD-L1–siRNA in NCI-H1975 cells was evaluated by reverse transcription-polymerase chain reaction (RT-PCR). H1975 cells (24 × 10⁴ cells/well) were inoculated into 6-well plates and incubated at 37 °C for 24 h. The medium was then replaced with fresh complete medium containing 10% FBS (1.8 mL), and the HA-PEI/PD-L1–siRNA and HA-P-PEI/PD-L1–siRNA complexes (N/P = 24:1; 100 nM) were added. PBS and NC siRNA were used as controls, and Lipofectamine 3000 was used as a positive control according to the manufacturer's instructions (Invitrogen). After a 6-h incubation, the medium was replaced with complete medium. After transfection for 24 h, total mRNA was isolated and reverse transcribed using the Evo M-MLV RT kit (Accurate Biotechnology Co., Ltd., Beijing, China) according to the manufacturer's instructions. RT-PCR was conducted on a PCR system (Q2000A; Hangzhou LongGene Scientific Instruments Co., Ltd., Hangzhou, China) using SYBR qPCR master mix (Vazyme, Nanjing, China). Primers for glyceraldehyde 3-phosphate dehydrogenase ( GAPDH ) and PD-L1 were as follows: GAPDH -forward, GGAGCGAGATCCCTCCAAAAT and GAPDH -reverse, GGCTGTTGTCATACTTCTCATGG; PD-L1 -forward, GCCGAAGTCATCTGGACAAGC and PD-L1 -reverse, GTGTTGATTCTCAGTGTGCTGGTCA. After an initial cycle at 95 °C for 300 s, amplification was performed for 40 cycles of 95 °C for 10 s and 60 °C for 30 s. GAPDH was used as an internal reference, and data were normalized prior to statistical analysis. To measure transfection efficacy at the protein level, western blot analysis was conducted after siRNA treatment for 48 h, as described. Cells were lysed with radioimmunoprecipitation assay lysis buffer (Beyotime, Beijing, China), and 10% sodium dodecyl sulfate-polyacrylamide gel electrophoresis was used to separate the proteins (Bio-Rad Laboratories, Richmond, CA, USA).
The proteins were transferred onto a nitrocellulose membrane and blocked in 5% skim milk for 1 h, followed by incubation of the membrane with anti-PD-L1 (1:1000; Abcam) and anti-calnexin (1:1000; Abcam) overnight at 4 °C. The membrane was then washed three times with PBS containing Tween-20 and incubated with the secondary antibody (anti-rabbit immunoglobulin G; 1:5000; Abways) for 1 h. Statistical analysis Statistical analysis was performed using GraphPad Prism (v.6.0; GraphPad Software, La Jolla, CA, USA) and Student's t-test. p < 0.05 was considered significant.
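As a minimal illustration of the statistical comparison just described, the sketch below runs an unpaired two-sample Student's t-test with SciPy; the viability percentages are invented.

```python
# Unpaired Student's t-test between two treatment groups (hypothetical viability %).
from scipy import stats

control = [98.1, 101.3, 99.5, 100.2]   # e.g., untreated wells
treated = [70.4, 73.9, 68.8, 71.6]     # e.g., polymer-treated wells

t_stat, p_value = stats.ttest_ind(control, treated)  # equal-variance Student's t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")        # p < 0.05 -> significant
```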
Synthesis and characterization of HA-PEI and HA-P-PEI As shown in , the HA-PEI conjugate was synthesized by attaching the amino groups of PEI (25 kDa) to the carboxyl groups of HA (20 kDa) using EDC/HOBt as the catalyst. 1H NMR results demonstrated the successful synthesis of HA-PEI, according to the methyl peak of HA at δ 1.9 and the representative peaks of PEI at δ 2.5–3.2. The calculated weight ratio of HA to PEI was 1:20. The HA-P-PEI conjugate comprised a 25-kDa PEI backbone, a Gly-PLGLAG-Cys linker, and HA-AEM. As shown in , HA-P-PEI was synthesized using a three-step reaction. First, HA-AEM was synthesized by attaching the amino group of AEM to the carboxyl group of HA using EDC/HOBt as the catalyst. Signal peaks of AEM between δ 2.6 and 3.2 are shown in . PEI-PLG was then prepared by linking the carboxyl group of the peptide to the amino groups of PEI, with the signal peaks of PLG (δ 0.6–2.2 and 3.2–4.4) shown in . Finally, HA-P-PEI was obtained by an addition reaction between the thiol group on PEI-PLG and the double bond on HA-AEM. 1H NMR results showed the characteristic signals of HA (δ 1.9), PLG (δ 0.6–2.2 and 3.2–4.4), and PEI (δ 2.4–3.2), indicating the successful synthesis of HA-P-PEI. The calculated weight ratio of HA to PEI was 1:11. Moreover, the infrared spectra (Figure S1) showed the representative absorption peak of the hydroxyl groups of HA at 3380 cm⁻¹ for both HA-PEI and HA-P-PEI. Additionally, we observed N–H stretching vibration peaks of PEI between 3100 and 3300 cm⁻¹, with the signal becoming weaker following conjugation of HA with PEI. Characterization of the HA-P-PEI/siRNA complexes The ability of the polymers to encapsulate the siRNA was assessed by agarose gel electrophoresis , where siRNA migration was completely retarded at N/P ratios >4 for the PEI/siRNA complexes (the ratios were ∼8 for the HA-PEI/siRNA and HA-P-PEI/siRNA complexes). These results indicated that the polymers successfully encapsulated the siRNA and formed stable nanocomplexes, with no free-siRNA bands observed by gel electrophoresis. We then evaluated the dispersion and morphology of the HA-PEI/siRNA and HA-P-PEI/siRNA complexes by TEM . The results showed that most of the polyplexes were nearly spherical, indicating that the HA-PEI and HA-P-PEI polymers could interact with the siRNA through electrostatic attraction and condense it to form nanocomplexes. The size and zeta potential of the complexes were measured by DLS , revealing that the sizes of the HA-PEI/siRNA and HA-P-PEI/siRNA nanoparticles (N/P = 24:1) were ∼200 nm. After exposure of HA-P-PEI/siRNA to MMP-2 for 6 h, TEM analysis showed that the original shape of the particles was fractured, resulting in large and small particles. Additionally, DLS results showed that some of the nanoparticles turned into very small particles (<10 nm), with the proportion of small particles (<100 nm) significantly increasing from 7.1 to 25.6% ( p < 0.05) following exposure to MMP-2 (Figure S2). This phenomenon is consistent with a previous report demonstrating that MMP-2 treatment resulted in ∼20% of PEG shells detaching from the surface of particles along with slightly larger sizes (Ke et al., ). Furthermore, zeta-potential measurements showed that the surface charges of HA-PEI/siRNA and HA-P-PEI/siRNA decreased from 37.6 to 16.4 mV and 6.89 mV, respectively, following MMP-2 treatment. We then assessed the stability of the complexes by incubating free siRNA, PEI/siRNA, HA-PEI/siRNA, and HA-P-PEI/siRNA with FBS at 37 °C.
Gel-retardation studies to observe siRNA degradation at various time intervals revealed that the polymer/siRNA complexes protected the siRNA for up to 24 h, whereas no siRNA was observed in the free-siRNA samples after 3 h. CD44, PD-L1, and MMP-2 expression in NCI-H1975 cells A previous study reported that HA strongly binds CD44 receptors overexpressed on many cancer cells, which can increase the cellular uptake of HA-bearing nanoparticles (Dosio et al., ). Additionally, studies have revealed that MMPs are overexpressed in various types of tumors, where their sensitive substrates can be used to develop tumor-microenvironment-responsive nanoparticles for better tumor diagnosis or treatment (Olson et al., ; Mansour et al., ). Because the present study aimed to develop an HA-conjugated, MMP-2-sensitive PD-L1–siRNA-delivery system, detection of CD44, MMP-2, and PD-L1 expression was required. We found that both CD44 and PD-L1 were overexpressed in NCI-H1975 cells , whereas western blotting confirmed that MMP-2 was also overexpressed in these cells (Figure S3). Cytotoxicity of the HA-PEI and HA-P-PEI polymers To determine whether HA–PEI coupling can reduce PEI cytotoxicity, we performed MTT assays on NCI-H1975 cells. After a 48-h incubation, the viability of NCI-H1975 cells was significantly lower in the presence of PEI alone relative to that with HA-PEI and HA-P-PEI (10 μg/mL each) . Moreover, the rate of proliferative inhibition by PEI was 58.62%, whereas that by HA-PEI was 28.57% ( p < 0.01 vs. PEI), and that by HA-P-PEI was 12.37% ( p < 0.001 vs. PEI). At a higher polymer concentration (20 μg/mL), the inhibition rate by PEI alone reached 81.86%, whereas that by HA-PEI was 77.24% and that by HA-P-PEI was 53.76% ( p < 0.01 vs. PEI). These results indicated that coupling of HA-AEM with PEI-PLG significantly reduced PEI cytotoxicity. A previous study reported that positively charged PEI induced cell death and caused cytotoxicity both in vitro and in vivo , thereby greatly hindering its clinical application (Shao et al., ); however, the mechanism of PEI toxicity remains poorly understood. Several PEI-interacting proteins, including heat-shock proteins, glutathione S-transferases, and protein disulfide isomerases (involved in apoptosis), have been identified in PEI-specific toxicity pathways (Khansarizadeh et al., ). Coupling PEI with negatively charged HA promotes electrostatic neutralization of PEI, and as the HA:PEI molar ratio increased, zeta-potential values decreased owing to the addition of HA (Kim et al., ). Moreover, we found that the weight fraction of PEI in HA-P-PEI (91.67%) was lower than that in HA-PEI (95.24%), resulting in a lower zeta potential for the HA-P-PEI/siRNA nanoparticles relative to HA-PEI/siRNA at the same N/P ratio and, hence, the decreased toxicity of HA-P-PEI relative to HA-PEI. Cellular uptake of the nanocomplexes We then investigated cellular uptake of the nanocomplexes by fluorescence microscopy and flow cytometry using Cy3-conjugated siRNA (red fluorescence) loaded into the different polymers and incubated with NCI-H1975 cells for 4 h at 37 °C. shows that significant fluorescent signals were observed in polymer/Cy3-siRNA-treated cells, with the red fluorescent signals mostly distributed in the cell nuclei. By contrast, little free Cy3-conjugated siRNA penetrated the cells. To quantitatively evaluate delivery efficacy, flow cytometric studies were conducted, revealing the highest cellular uptake of FAM-siRNA in the PEI/FAM-siRNA group.
Similarly, we observed an increase in FAM-siRNA uptake in the HA-P-PEI/FAM-siRNA group relative to the free FAM-siRNA group . Moreover, the fluorescence intensity in HA-PEI/FAM-siRNA-treated cells was weaker than that in the PEI/FAM-siRNA-treated group and stronger than that in the HA-P-PEI/FAM-siRNA-treated group. This might be attributed to the decreased zeta potential resulting from PEI coupling with HA, as HA reportedly has a strong negative impact on transfection efficiency (van de Wetering et al., ). Penetration of the nanocomplexes into H1975 tumor spheroids The tumor-spheroid model can precisely reflect the penetration of nanoparticles into deeper regions of solid tumors, making it a more realistic simulation of the in vivo situation. The effects of the different formulations at different depths in H1975 tumor spheroids are shown in . Free siRNA and PEI/siRNA showed penetration depths of 30 μm and 60 μm, respectively, which might be attributed to the small size of the free siRNA and the positive charge of PEI. Additionally, the fluorescence intensity of HA-PEI/siRNA and HA-P-PEI/siRNA decreased inside the spheroid, which might have been due to the coupling of negatively charged HA and the relatively large particle size. Moreover, HA-P-PEI/siRNA nanoparticles pretreated with MMP-2 showed increased fluorescence intensity at deeper levels in the spheroids. Further analysis of the effect of MMP-2 sensitivity on the fluorescence of the nanoparticles at different depths showed that, after the addition of MMP-2, penetration of the HA-P-PEI/siRNA nanoparticles into the spheroids significantly increased relative to that of the HA-P-PEI/siRNA nanoparticles without MMP-2 treatment . Therefore, these results support the hypothesis that shrinkage of large particles into smaller particles facilitates penetration. Hyaluronidases, which degrade HA, have been implicated in tumor progression, metastasis, and angiogenesis, with previous studies revealing that hyaluronidases can be used to develop HA-based drug-delivery systems (Liu, Hu, et al., ; Luo et al., ; Yu et al., ). However, in the present study, the fluorescence intensity of HA-PEI/siRNA inside the spheroid was lower than that of PEI/siRNA, suggesting that hyaluronidase-mediated degradation was insufficient to remove the HA and expose the PEI. Three reasons may explain this phenomenon. First, the molecular weight of HA in this study was only 20 kDa, which is almost the same as that of degraded human HA fragments; a previous report revealed that hyaluronidase-1 and -2 cleave HA into small (<20 kDa) fragments (Patel et al., ). Second, hyaluronidase expression in H1975 cells may be too low to degrade the HA. Third, the type of hyaluronidase expressed in H1975 cells may not be enzymatically active. We found that penetration of the HA-PEI/siRNA nanoparticles into the tumor spheroid was hindered by particle size and the negatively charged HA; therefore, we added an MMP-2-sensitive linker to synthesize HA-P-PEI. The results showed that penetration of the MMP-2-treated HA-P-PEI/siRNA nanoparticles into the spheroids significantly increased relative to the untreated nanoparticles and the HA-PEI/siRNA nanoparticles. This was attributed to the decrease in particle size (<100 nm) following exposure to MMP-2. Future studies will examine the expression of hyaluronidase in H1975 cells and the release of HA from HA-PEI/siRNA nanoparticles.
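The depth-resolved intensities above were obtained with ImageJ; for readers who prefer a scripted route, the sketch below shows one way a mean-intensity-versus-depth profile could be computed from a confocal z-stack, assuming a multi-page TIFF readable with the tifffile package. The file name and slice spacing are hypothetical.

```python
# Mean fluorescence intensity per z-slice of a confocal stack (hypothetical file).
import tifffile

stack = tifffile.imread("spheroid_cy3_zstack.tif")  # shape: (n_slices, height, width)
z_step_um = 10.0                                    # hypothetical slice spacing

for i, plane in enumerate(stack):
    depth = i * z_step_um
    print(f"depth {depth:5.1f} um: mean intensity {plane.mean():.1f}")
```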
RT-PCR and Western blot verification of PD-L1 knockdown We then determined the PD-L1–siRNA effectiveness in NCI-H1975 cells. RT-PCR analysis of NCI-H1975 cells transfected with Lipo3000/PD-L1–siRNA (positive control), PEI/PD-L1–siRNA, HA-PEI/PD-L1–siRNA, HA-P-PEI/PD-L1–siRNA, and Lipo3000/NC siRNA for 24 h revealed decreased levels of PD-L1 mRNA in cells treated with Lipo3000/PD-L1–siRNA, PEI/PD-L1–siRNA, HA-PEI/PD-L1–siRNA, and HA-P-PEI/PD-L1–siRNA relative to the NC . Specifically, HA-PEI/PD-L1–siRNA transfection decreased PD-L1 mRNA levels by up to 45% relative to the NC group ( p < 0.05). Western blot analysis confirmed these findings , with the HA-PEI/PD-L1–siRNA-treated group also showing clear downregulation of PD-L1 to levels similar to those in PEI/PD-L1–siRNA- and the Lipo3000/PD-L1–siRNA-treated cells along with observation of high serum stability of the nanoparticles treated with 50% FBS for 24 h. Additionally, although HA-PEI/PD-L1–siRNA demonstrated higher efficacy in attenuating PD-L1 expression relative to HA-P-PEI/PD-L1–siRNA, the tumor penetration of HA-PEI was weaker than that of HA-P-PEI (with the treatment of MMP-2). The addition of MMP-2 shrunk the HA-P-PEI/PD-L1–siRNA nanoparticles, resulting in deeper penetration of tumor spheroids. Furthermore, the HA-P-PEI nanoparticles demonstrated decreased cytotoxicity than the HA-PEI nanoparticles, suggesting their potential therapeutic superiority.
As shown in , the HA-PEI conjugate was synthesized by attaching the amino group of PEI (25 kDa) to the carboxyl group of HA (20 kDa) using EDC/HOBt as the catalyst. 1H NMR results demonstrated the successful synthesis of HA-PEI, based on the methyl peak of HA at δ 1.9 and the representative peaks of PEI from δ 2.5 to 3.2. The calculated weight ratio of HA to PEI was 1:20. The HA-P-PEI conjugate comprised a 25-kDa PEI backbone, a Gly-PLGLAG-Cys linker, and HA-AEM. As shown in , HA-P-PEI was synthesized using a three-step reaction. First, HA-AEM was synthesized by attaching the amino group of AEM to the carboxyl group of HA using EDC/HOBt as the catalyst; the signal peaks of AEM between δ 2.6 and 3.2 are shown in . PEI-PLG was then prepared by linking the carboxyl group of the peptide to the amino group of PEI, with the signal peaks of PLG (δ 0.6–2.2 and 3.2–4.4) shown in . Finally, HA-P-PEI was obtained by an addition reaction between the thiol group on PEI-PLG and the double bond on HA-AEM. 1H NMR results showed the characteristic signals of HA (δ 1.9), PLG (δ 0.6–2.2 and 3.2–4.4), and PEI (δ 2.4–3.2), indicating the successful synthesis of HA-P-PEI. The calculated weight ratio of HA to PEI was 1:11. Moreover, the infrared spectrum (Figure S1) showed the representative absorption peak of the hydroxyl groups of HA at 3380 cm⁻¹ for both HA-PEI and HA-P-PEI. Additionally, we observed N–H stretching vibration peaks of PEI between 3100 and 3300 cm⁻¹, with the signal becoming weaker following conjugation of HA with PEI.
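Weight ratios such as the 1:20 and 1:11 quoted above are typically derived from the relative 1H NMR peak integrals, normalized by the number of protons and the molar mass of each repeat unit. The following minimal sketch illustrates that arithmetic; the integral values, the 3-proton N-acetyl methyl of the HA disaccharide repeat (~401 g/mol), and the 4-proton CH2 groups of the ethylenimine repeat (~43 g/mol) are illustrative assumptions, not values reported in the study.

```python
# Estimate the HA:PEI weight ratio of a conjugate from 1H NMR peak integrals.
# Assumptions (illustrative, not from the study):
#   - HA N-acetyl methyl singlet (~delta 1.9): 3 protons per ~401 g/mol disaccharide repeat
#   - PEI backbone CH2 envelope (delta 2.5-3.2): 4 protons per ~43 g/mol -CH2CH2NH- repeat

HA_PROTONS, HA_REPEAT_MW = 3, 401.3    # N-acetyl CH3 of the HA disaccharide
PEI_PROTONS, PEI_REPEAT_MW = 4, 43.07  # ethylenimine repeat unit

def ha_pei_weight_ratio(integral_ha: float, integral_pei: float) -> float:
    """Return the HA:PEI weight ratio (grams HA per gram PEI)."""
    moles_ha_repeat = integral_ha / HA_PROTONS
    moles_pei_repeat = integral_pei / PEI_PROTONS
    return (moles_ha_repeat * HA_REPEAT_MW) / (moles_pei_repeat * PEI_REPEAT_MW)

# Hypothetical integrals chosen so the result lands near the reported 1:20
ratio = ha_pei_weight_ratio(integral_ha=1.0, integral_pei=248.0)
print(f"HA:PEI = 1:{1 / ratio:.1f} (w/w)")
```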
The ability of the polymers to encapsulate the siRNA was assessed by agarose gel electrophoresis , where siRNA migration was completely retarded at N/P ratios >4 for the PEI/siRNA complexes (the corresponding ratios were ∼8 for the HA-PEI/siRNA and HA-P-PEI/siRNA complexes). These results suggested that the polymers successfully encapsulated the siRNA and formed stable nanocomplexes, with no free-siRNA bands observed by gel electrophoresis. We then evaluated the dispersion and morphology of the HA-PEI/siRNA and HA-P-PEI/siRNA complexes by TEM . The results suggested that most of the polyplexes were nearly spherical, indicating that the HA-PEI and HA-P-PEI polymers could interact with siRNA through electrostatic attraction and condense the siRNA to form nanocomplexes. The size and zeta potential of the complexes were measured by DLS , revealing that the sizes of the HA-PEI/siRNA and HA-P-PEI/siRNA nanoparticles (N/P = 24:1) were ∼200 nm. After exposure of HA-P-PEI/siRNA to MMP-2 for 6 h, TEM analysis showed that the original shape of the particles was fractured, resulting in both large and small particles. Additionally, DLS results showed that some of the nanoparticles turned into very small particles (<10 nm), with the proportion of small particles (<100 nm) significantly increasing from 7.1% to 25.6% (p < 0.05) following exposure to MMP-2 (Figure S2). This phenomenon is consistent with a previous report demonstrating that MMP-2 treatment resulted in ∼20% of PEG shells detaching from the surface of particles, along with slightly larger sizes (Ke et al., ). Furthermore, zeta-potential measurements showed that the surface charges of HA-PEI/siRNA and HA-P-PEI/siRNA decreased from 37.6 mV to 16.4 mV and 6.89 mV, respectively, following MMP-2 treatment. We then assessed the stability of the complexes by incubating free siRNA, PEI/siRNA, HA-PEI/siRNA, and HA-P-PEI/siRNA with FBS at 37 °C. Gel-retardation studies observing siRNA degradation at various time intervals revealed that the polymer/siRNA complexes protected the siRNA for up to 24 h, whereas no siRNA was observed in the free-siRNA samples after 3 h.
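For context, the N/P ratio used above is the molar ratio of polymer amine nitrogens to siRNA backbone phosphates, and it fixes how much polymer is mixed with a given amount of siRNA. The sketch below shows the conventional arithmetic; the per-repeat masses (43 g/mol per PEI nitrogen, ~330 g/mol per nucleotide phosphate) are standard textbook values, and the function name is ours, not from the study.

```python
# Compute the polymer mass needed to complex a given siRNA mass at a target N/P ratio.
# Conventional constants: one protonatable nitrogen per 43 g/mol PEI repeat unit,
# one phosphate per ~330 g/mol RNA nucleotide.

PEI_MASS_PER_N = 43.0    # g/mol of PEI per amine nitrogen
RNA_MASS_PER_P = 330.0   # g/mol of RNA per backbone phosphate

def polymer_mass_for_np(sirna_ug: float, np_ratio: float) -> float:
    """Micrograms of PEI (or PEI content of a conjugate) for a target N/P ratio."""
    phosphate_nmol = sirna_ug / RNA_MASS_PER_P * 1000.0  # nmol phosphate per ug siRNA
    nitrogen_nmol = phosphate_nmol * np_ratio
    return nitrogen_nmol * PEI_MASS_PER_N / 1000.0       # back to micrograms

# Example: 1 ug siRNA at the N/P = 24 used for the DLS measurements above
print(f"{polymer_mass_for_np(1.0, 24.0):.2f} ug PEI per ug siRNA")
```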
CD44, PD-L1, and MMP2 expressions in NCI-H1975 cells
A previous study reported that HA binds strongly to CD44 receptors, which are overexpressed on many cancer cells, thereby increasing the cellular uptake of HA-decorated nanoparticles (Dosio et al., ). Additionally, studies have revealed that MMPs are overexpressed in various types of tumors, and their sensitive substrates can be used to develop tumor-microenvironment-responsive nanoparticles for better tumor diagnosis or treatment (Olson et al., ; Mansour et al., ). Because the present study aimed to develop an HA-conjugated, MMP-2-sensitive PD-L1–siRNA-delivery system, detection of CD44, MMP-2, and PD-L1 expression was required. We found that both CD44 and PD-L1 were overexpressed in NCI-H1975 cells , and western blot confirmed that MMP-2 was also overexpressed in these cells (Figure S3).
Cytotoxicity of the HA-PEI and HA-P-PEI polymers
To determine whether HA–PEI coupling can reduce PEI cytotoxicity, we performed MTT assays on NCI-H1975 cells. After a 48-h incubation, the viability of NCI-H1975 cells was significantly lower in the presence of PEI alone than with HA-PEI or HA-P-PEI (10 μg/mL each) . Moreover, the rate of proliferative inhibition by PEI was 58.62%, whereas that by HA-PEI was 28.57% (p < 0.01 vs. PEI) and that by HA-P-PEI was 12.37% (p < 0.001 vs. PEI). At a higher polymer concentration (20 μg/mL), the inhibition rate for PEI alone reached 81.86%, whereas that for HA-PEI was 77.24% and that for HA-P-PEI was 53.76% (p < 0.01 vs. PEI). These results indicated that coupling HA-AEM with PEI-PLG significantly reduced PEI cytotoxicity. A previous study reported that positively charged PEI induces cell death and causes cytotoxicity both in vitro and in vivo, greatly hindering its clinical application (Shao et al., ); however, the mechanism of PEI toxicity remains poorly understood. Several PEI-interacting proteins, including heat-shock proteins, glutathione-S-transferases, and protein disulfide isomerases (involved in apoptosis), have been identified in PEI-specific toxicity pathways (Khansarizadeh et al., ). Coupling PEI with negatively charged HA promotes electrostatic neutralization of PEI, and zeta-potential values have been reported to decrease as the HA:PEI molar ratio increases (Kim et al., ). Moreover, we found that the weight fraction of PEI in HA-P-PEI (91.67%) was lower than that in HA-PEI (95.24%), which explains the lower zeta potential of HA-P-PEI/siRNA nanoparticles relative to HA-PEI/siRNA at the same N/P ratio, and in turn the lower toxicity of HA-P-PEI relative to HA-PEI.
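The inhibition rates above are the usual MTT readout, computed from background-corrected absorbances of treated versus untreated wells. A minimal sketch of that calculation follows; the absorbance values and well layout are hypothetical, chosen only to illustrate the formula inhibition (%) = (1 − A_treated / A_control) × 100.

```python
# MTT proliferative-inhibition rate from raw absorbance readings (hypothetical data).
# inhibition (%) = (1 - (A_treated - A_blank) / (A_control - A_blank)) * 100

def inhibition_rate(a_treated: float, a_control: float, a_blank: float = 0.05) -> float:
    return (1.0 - (a_treated - a_blank) / (a_control - a_blank)) * 100.0

# Hypothetical 570-nm absorbances for wells treated with 10 ug/mL polymer
wells = {"PEI": 0.44, "HA-PEI": 0.72, "HA-P-PEI": 0.87}
control = 0.99  # untreated cells

for name, a in wells.items():
    print(f"{name}: {inhibition_rate(a, control):.1f}% inhibition")
```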
Cellular uptake of the nanocomplexes
We then investigated cellular uptake of the nanocomplexes by fluorescence microscopy (using Cy3-conjugated siRNA; red fluorescence) and flow cytometry (using FAM-conjugated siRNA), with the labeled siRNA loaded into the different polymers and incubated with NCI-H1975 cells for 4 h at 37 °C. shows that strong fluorescent signals were observed in polymer/Cy3-siRNA-treated cells, with the red fluorescence mostly distributed in the cell nuclei. By contrast, few free Cy3-conjugated siRNAs penetrated the cells. To quantitatively evaluate delivery efficacy, flow cytometric studies were conducted, revealing the highest cellular uptake of FAM-siRNA in the PEI/FAM-siRNA group. Similarly, we observed an increase in FAM-siRNA uptake in the HA-P-PEI/FAM-siRNA group relative to the free FAM-siRNA group . Moreover, the fluorescence intensity in HA-PEI/FAM-siRNA-treated cells was weaker than that in the PEI/FAM-siRNA-treated group and stronger than that in the HA-P-PEI/FAM-siRNA-treated group. This might be attributed to the decreased zeta potential resulting from PEI coupling with HA, as HA reportedly has a strong negative impact on transfection efficiency (van de Wetering et al., ).
Penetration of the nanocomplexes into H1975 tumor spheroids
The tumor-spheroid model can reflect the penetration of nanoparticles into deeper regions of solid tumors, making it a more realistic simulation of the in vivo situation. The effects of the different formulations at different depths in H1975 tumor spheroids are shown in . Free siRNA and PEI/siRNA reached penetration depths of 30 μm and 60 μm, respectively, which might be attributed to the small size of free siRNA and the positive charge of PEI. Additionally, the fluorescence intensity of HA-PEI/siRNA and HA-P-PEI/siRNA decreased toward the interior of the spheroid, which might have been due to the coupling of negatively charged HA and the relatively large particle size. Moreover, HA-P-PEI/siRNA nanoparticles pretreated with MMP-2 showed increased fluorescence intensity at deeper levels in the spheroids. Further analysis of the effect of MMP-2 sensitivity on the fluorescence of the nanoparticles at different depths showed that, after the addition of MMP-2, penetration of the HA-P-PEI/siRNA nanoparticles into the spheroids significantly increased relative to that of HA-P-PEI/siRNA nanoparticles without MMP-2 treatment . Therefore, these results support the hypothesis that shrinkage of large particles into smaller particles can facilitate penetration. Hyaluronidases that degrade HA have been implicated in tumor progression, metastasis, and angiogenesis, and previous studies have shown that hyaluronidases can be exploited to develop HA-based drug-delivery systems (Liu, Hu, et al., ; Luo et al., ; Yu et al., ). In the present study, however, the fluorescence intensity of HA-PEI/siRNA inside the H1975 spheroids was lower than that of PEI/siRNA, suggesting that hyaluronidase-mediated degradation was insufficient to remove the HA shell and expose PEI. Three possible reasons may explain this phenomenon. First, the molecular weight of the HA used in this study was only 20 kDa, which is almost the same as that of degraded human HA fragments; a previous report revealed that hyaluronidase-1 and -2 cleave HA into small (<20 kDa) fragments (Patel et al., ). Second, hyaluronidase expression in H1975 cells may be too low to degrade HA. Third, the type of hyaluronidase expressed in H1975 cells may not be enzymatically active. We found that penetration of HA-PEI/siRNA nanoparticles into the tumor spheroid was hindered by particle size and the negatively charged HA; therefore, we added an MMP-2-sensitive linker to synthesize HA-P-PEI. The results showed that penetration of the HA-P-PEI/siRNA nanoparticles into the spheroids significantly increased relative to both nanoparticles without MMP-2 treatment and HA-PEI/siRNA nanoparticles. This was attributed to the decrease in particle size (<100 nm) following exposure to MMP-2. Future studies will address the expression of hyaluronidase in H1975 cells and the release of HA from HA-PEI/siRNA nanoparticles.
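Penetration comparisons like the one above are typically quantified by averaging fluorescence intensity over each optical section of a confocal z-stack and plotting mean intensity against depth. The sketch below shows one way to do this with NumPy; the stack shape, the 10-μm slice spacing, and the synthetic data are illustrative assumptions rather than details of the study.

```python
import numpy as np

# Hypothetical confocal z-stack of a spheroid: (n_slices, height, width),
# acquired at 10-um steps from the spheroid surface inward. Synthetic data
# with an exponential decay stands in for a real image stack.
rng = np.random.default_rng(0)
stack = rng.random((13, 256, 256)) * np.exp(-np.arange(13) / 4.0)[:, None, None]

SLICE_STEP_UM = 10.0

def depth_profile(zstack: np.ndarray) -> list[tuple[float, float]]:
    """Mean fluorescence intensity of each optical section vs. depth (um)."""
    return [(i * SLICE_STEP_UM, float(sec.mean())) for i, sec in enumerate(zstack)]

for depth, intensity in depth_profile(stack):
    print(f"{depth:5.0f} um: mean intensity {intensity:.3f}")
```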
RT-PCR and Western blot verification of PD-L1 knockdown
We then determined the effectiveness of the PD-L1–siRNA in NCI-H1975 cells. RT-PCR analysis of NCI-H1975 cells transfected for 24 h with Lipo3000/PD-L1–siRNA (positive control), PEI/PD-L1–siRNA, HA-PEI/PD-L1–siRNA, HA-P-PEI/PD-L1–siRNA, or Lipo3000/NC siRNA revealed decreased levels of PD-L1 mRNA in cells treated with Lipo3000/PD-L1–siRNA, PEI/PD-L1–siRNA, HA-PEI/PD-L1–siRNA, and HA-P-PEI/PD-L1–siRNA relative to the NC . Specifically, HA-PEI/PD-L1–siRNA transfection decreased PD-L1 mRNA levels by up to 45% relative to the NC group (p < 0.05). Western blot analysis confirmed these findings , with the HA-PEI/PD-L1–siRNA-treated group also showing clear downregulation of PD-L1 to levels similar to those in PEI/PD-L1–siRNA- and Lipo3000/PD-L1–siRNA-treated cells; in addition, the nanoparticles showed high serum stability when treated with 50% FBS for 24 h. Although HA-PEI/PD-L1–siRNA was more effective than HA-P-PEI/PD-L1–siRNA at attenuating PD-L1 expression, the tumor penetration of HA-PEI was weaker than that of MMP-2-treated HA-P-PEI: the addition of MMP-2 shrank the HA-P-PEI/PD-L1–siRNA nanoparticles, resulting in deeper penetration of tumor spheroids. Furthermore, the HA-P-PEI nanoparticles showed lower cytotoxicity than the HA-PEI nanoparticles, suggesting their potential therapeutic superiority.
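Relative mRNA levels such as the 45% knockdown reported above are commonly computed with the 2^(-ΔΔCt) method, normalizing the target gene to a housekeeping gene and then to the negative-control group. Whether this exact method was used here is an assumption; the Ct values below are hypothetical and serve only to illustrate the calculation.

```python
# Relative PD-L1 expression by the 2^(-ddCt) method (hypothetical Ct values).
# dCt  = Ct(target) - Ct(housekeeping), per sample
# ddCt = dCt(treated) - dCt(negative control)
# relative expression = 2 ** (-ddCt)

def relative_expression(ct_target, ct_ref, ct_target_nc, ct_ref_nc):
    d_ct = ct_target - ct_ref
    d_ct_nc = ct_target_nc - ct_ref_nc
    return 2.0 ** (-(d_ct - d_ct_nc))

# Hypothetical PD-L1 vs. GAPDH Ct values for a treated sample and the NC group
rel = relative_expression(ct_target=24.9, ct_ref=17.0,
                          ct_target_nc=24.0, ct_ref_nc=17.0)
print(f"PD-L1 expression relative to NC: {rel:.2f} ({(1 - rel) * 100:.0f}% knockdown)")
```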
In this study, we developed and evaluated an MMP-2-sensitive siRNA-delivery system. The results showed that the size of the HA-P-PEI/siRNA nanoparticles decreased from 186.4 nm to <10 nm following exposure to MMP-2, owing to the MMP-2-sensitive linker. We demonstrated that HA-P-PEI/siRNA could be effectively taken up by H1975 cells, that HA-P-PEI/PD-L1–siRNA successfully downregulated PD-L1 levels in these cells (as shown by western blot), and that the nanoparticles penetrated deep into tumor spheroids. These findings indicate that the HA-P-PEI nanoparticles are superior in their ability to carry siRNA into solid tumors, although additional studies are necessary to further enhance transfection efficiency and reduce toxicity.
An Insight into the Role of Postmortem Immunohistochemistry in the Comprehension of the Inflammatory Pathophysiology of COVID-19 Disease and Vaccine-Related Thrombotic Adverse Events: A Narrative Review
Since its first reported outbreak in Wuhan, China, in December 2019, COVID-19, the disease caused by SARS-CoV-2, has rapidly spread throughout the globe, leading the World Health Organization (WHO) to declare a pandemic on 11 March 2020 . The resulting infection mainly affects the lungs, with a broad spectrum of clinical manifestations that range from asymptomatic or mild, flu-like forms to Severe Acute Respiratory Syndrome (SARS) with multiorgan failure [ , , , ], especially in older patients with comorbidities (e.g., hypertension, obesity, diabetes, chronic obstructive pulmonary disease, coronary artery disease, and chronic kidney disease) . The COVID-19 pandemic therefore represents not only a leading cause of death worldwide, but also a burden for the healthcare system, both in terms of intensive care unit overload and in the management of economic resources [ , , , , ]. The need for correct clinical management and specific therapeutic strategies has thus led to a large body of research aiming to shed light on the mechanisms underlying the pathophysiology of COVID-19. In such a context, the role of postmortem investigations is of utmost importance, owing to the possibility not only of directly analyzing the affected organs, but also of achieving a correct diagnosis, thus assessing whether a patient died "from" or "with" COVID-19. Despite the publication of recommendations and safety strategies to be adopted in cases of confirmed and/or suspected COVID-19 deaths, several countries have chosen not to perform autopsies except in selected cases ; as a result, the exact pathophysiological mechanisms of COVID-19 infection have been only partly understood [ , , ]. According to current scientific knowledge, SARS-CoV-2 infection starts when the viral Spike glycoprotein (S) binds to Angiotensin-Converting Enzyme 2 (ACE2), which is highly expressed on epithelial cells of the respiratory tract and on endothelial cells; the subsequent fusion of the viral envelope with the host cell membrane first activates an innate immune response mediated by both the inflammasome and the NF-κB pathway , leading to the release of a great number of pro-inflammatory cytokines (the so-called "cytokine storm"), which in turn activate the adaptive immune response and contribute to the onset of a hyperinflammation state evolving into Acute Respiratory Distress Syndrome (ARDS), prominent hypercoagulability, and multiorgan failure [ , , , ]. In the postmortem setting, the lungs are considered the most affected organ, with evidence, on gross and histologic examination, of Diffuse Alveolar Damage (DAD) and microthrombi in the pulmonary vessels . Another important aspect relates to vaccines, which represent the most important countermeasure against the COVID-19 pandemic. Two of the authorized vaccines are adenoviral vector-based (the COVID-19 Vaccine Janssen and the COVID-19 Vaccine AstraZeneca (Vaxzevria, ChAdOx1-S)), and one of these, the AstraZeneca vaccine, has been associated with severe vascular reactions that have sown public distrust. The mechanism by which the vaccine causes these vascular complications is still being defined, but it appears to be associated with an inappropriate immune response .
In these difficult contexts, postmortem studies are also useful in helping the scientific community to better understand both the pathophysiology of COVID-19 and the pathological events underlying vaccine-related vascular reactions. In this context, additional reliance on ancillary techniques, such as immunohistochemistry [ , , ], allows the identification of specific cells and/or mediators recruited at the inflammation site, thus contributing to a better definition of the pathophysiological events guiding the immune response. Hence, the aim of the present work was to highlight the role of postmortem investigations in the comprehension of the pathophysiological events underlying COVID-19 infection and vaccine-induced adverse vascular reactions, specifically focusing on the immunohistochemical approach.
According to the research produced so far, COVID-19 mostly affects the lungs, evolving into Acute Respiratory Distress Syndrome (ARDS) in critical cases. The corresponding histological findings are represented by Diffuse Alveolar Damage (DAD) at different stages (mainly exudative and proliferative), characterized by hyaline membranes, intra-alveolar and/or septal and/or interstitial oedema, intra-alveolar fibrinous exudate, inflammatory cell infiltrates, type 2 pneumocyte hyperplasia/activation, and squamous metaplasia [ , , , , , , , , , , , , , , , , , ]. As evidenced in several works, the addition of immunohistochemical analyses to postmortem investigations has further helped to define the immune cell infiltrates in the lungs ( ). In their work, Lupariello et al. relied on immunohistochemistry to characterize the lymphocytic immune infiltrate in the lungs of two patients with COVID-19, revealing diffuse perivascular recruitment of T lymphocytes (CD3+) and focal infiltration of CD8+ T lymphocytes and NK cells (TIA1+) in vessels and perivascular spaces, but no B lymphocyte infiltrates. As well as lymphocytes, macrophages also play a key role in the COVID-19-induced response, as highlighted in the works of both Conde et al. and Suess et al. , where immunohistochemical reactivity for CD68 revealed intra-alveolar macrophage infiltrates showing viral cytopathic-like changes, such as vesicular nuclei with prominent nucleoli, together with occasional multinucleated giant cells. The authors also performed immunohistochemistry for cytokeratin AE1/AE3 and TTF-1 , which showed the same cytopathic changes, along with severe hyperplasia of type 2 pneumocytes. Similar results were obtained by Barton et al. , who carried out immunohistochemical assays for both macrophage and lymphocytic markers on lung samples obtained from two patients positive for COVID-19. In both cases, CD68 immunoreactivity revealed the presence of alveolar macrophages. The assessment of immunoreactivity for CD3, CD4, and CD8 revealed T lymphocytes within the alveolar septa, with CD8+ T cells slightly outnumbering the CD4+ ones; in this case, CD20+ B lymphocytes, although rare, were also detected. In the second case, where acute bronchopneumonia foci along with aspirated food particles were detected, neutrophils and histiocytes were also found in the peribronchiolar airspaces. In contrast with the works mentioned above, the evaluation by Cipolloni et al. of the same markers to define the COVID-19-related immune infiltrates in the lungs of two cases revealed CD20+ B lymphocytes (specifically, CD79+ plasma cells) as the major lymphocytic infiltrate in both cases, together with CD68+ macrophages. CD4+ and CD8+ T cells were also detected, predominantly located in the interstitial spaces and around larger bronchioles. The assessment of mostly the same immunohistochemical markers was carried out by Hanley et al. , who summarized the main findings from ten cases. Interstitial CD68+ macrophages were prominent in all cases; mild to moderate lymphocytic infiltrates were also detected in all ten cases, with CD4+ T cells outnumbering CD8+ T cells. Occasionally, CD56+ NK cells and small CD20+ B cells were found. Mild interstitial neutrophilic infiltrates were detected in 3/10 patients.
CD68+ macrophages and CD3+ T cells were the main infiltrates within the alveolar spaces in the case discussed by Cîrstea et al. , along with very rare CD20+ B cells. Immunostaining for pan-cytokeratin (CK) AE1/AE3 or CK7 revealed extensively proliferated, thickened, and detached epithelial cells; in addition, α-SMA+ myofibroblasts were mainly detected within the thickened alveolar spaces. Interstitial lympho-monocytic infiltrates, with a predominance of CD3+ T lymphocytes over monocytes and an absence of CD20+ B lymphocytes, were the main immunohistochemical findings in the case analyzed in Aguiar et al.'s work . The immunohistochemical investigations carried out by Oprinca et al. in their three cases also showed focal areas of CD3+ and CD5+ T lymphocyte infiltrates, along with scattered CD20+ B lymphocytes; focal neutrophils were also detected. Immunohistochemistry was further performed with pancytokeratin panels (CK AE1–AE3; CK-MNF116), which stained positively within hyaline membranes, thus confirming their origin from the epithelial lining; finally, positive immunoreactivity for CK7 was found in pneumocytes that had undergone viral cytopathic changes. In the 10 cases evaluated in Fox et al.'s work , the inflammatory infiltrate was mainly represented by CD4+ and CD8+ T lymphocytes, predominantly detected within the interstitial spaces and around larger bronchioles and blood vessels. CD4+ T lymphocytes appeared in aggregates surrounding small vessels, in some of which platelets and small thrombi were also detected. In these cases also, desquamated type 2 pneumocytes showing viral cytopathic effects (cytomegaly, enlarged nuclei) were present. In the work by Duarte-Neto et al. , the immunohistochemical assay carried out on lung samples from 10 COVID-19 cases revealed a difference in the abundance of both CD4+ and CD8+ T lymphocytes depending on the DAD phase: few CD20+ B lymphocytes were observed in all cases, while CD4+ and CD8+ T lymphocyte infiltrates ranged from scarce in cases with exudative DAD to moderate in cases with fibroproliferative DAD. CD57+ NK cells were present in all cases, while CD68+ macrophages were mostly distributed in the alveolar spaces and within fibroproliferative areas. The evaluation of the immune infiltrate in the lungs of the 38 cases examined by Carsana et al. highlighted a large number of CD68+ macrophages, mainly localized in the alveolar lumen, in 24 cases, while CD45+ and CD3+ T lymphocytes represented the main infiltrate within the interstitial space in 31 cases. The case series was further extended in the work of Bussani et al. , who carried out immunohistochemical evaluations of the inflammatory infiltrate in the lungs of 41 patients; the infiltrates mainly consisted of clusters of macrophages and of CD8+ T lymphocytes outnumbering CD4+ cells. Lastly, Ackermann et al. compared the pulmonary immune infiltrates between a group of patients who died from COVID-19 and a group who died from ARDS secondary to influenza A (H1N1) infection. They found a similar infiltrate of CD3+ T cells within precapillary and postcapillary vessel walls; among these, CD4+ T cells were more numerous in the lungs of patients with COVID-19 than in those of patients with influenza, whereas CD8+ T cells were less numerous. Sporadic CD15+ neutrophils were also detected adjacent to the alveolar epithelial lining in the COVID-19 group.
These findings are consistent with the activation of a classic immune response to a respiratory virus , which is mainly cytotoxic ( ). The infection of respiratory epithelial cells by SARS-CoV-2 induces direct or indirect damage, as highlighted by cytopathic changes such as hyperplastic, dysmorphic, and/or multinucleated type 2 pneumocytes [ , , , , , , , ]. As Antigen-Presenting Cells (APCs), the infected epithelial cells subsequently activate CD8+ T lymphocytes, which exert their cytotoxic effect by releasing perforin and granzymes, thereby inducing apoptosis; the cytotoxic response is also supported by the sporadic involvement of NK cells [ , , ]. On the other hand, subepithelial dendritic cells (DCs) and alveolar macrophages recognize and present viral antigens to CD4+ T lymphocytes, subsequently inducing Th1 and Th17 polarization of the immune response. The subsequent interaction of CD4+ T cells with B lymphocytes promotes the production of IgM, IgA, and IgG isotype virus-specific antibodies [ , , ]. In multiple studies, a reduction of both CD4+ and CD8+ T cells in peripheral blood has been observed, mainly in moderate and severe cases, and correlating with SARS-CoV-2-related severity and mortality. This finding has led to the hypothesis that the massive recruitment of T lymphocytes within the lungs (intra-alveolar lumen; alveolar septa; interstitium; perivascular spaces) could explain the lymphopenia observed in severe forms, thus contributing to the progression of SARS-CoV-2 infection [ , , , , , , , , , , , , , , , , ]. As for neutrophils, since they have been detected mainly in cases with concomitant bacterial superinfection [ , , , , ], they do not seem to be directly involved in the COVID-19-related immune response. Taken together, these results support the role of postmortem immunohistochemical investigations in helping to define the COVID-19-related immune infiltrates .
The evidence of platelet-rich thrombi and megakaryocytes in alveolar capillaries and in small- and medium-sized pulmonary vessels [ , , , , , , , , , , , ], as well as of thromboembolism of the main pulmonary arteries , has led to the hypothesis that both direct COVID-19 infection of the endothelial cells and the subsequent massive inflammatory response may contribute to damage of the endothelium, thus inducing vasculopathy and endotheliitis, which further enhance the pulmonary damage [ , , , , , ]. In this context, it has been postulated that a better understanding of the mechanisms underlying COVID-19-related vasculopathy could provide a rationale for therapies aimed at stabilizing the endothelium, especially in vulnerable patients with pre-existing endothelial dysfunction (hypertension, diabetes, cardiovascular disease, etc.) . The endothelial damage of pulmonary vessels in patients with COVID-19 has been demonstrated immunohistochemically on samples obtained postmortem in only a few works ( ). In the three cases analyzed by Cîrstea et al. , immunoreactivity of CD31+ and CD34+ endothelial cells and of collagen IV on the basal membranes revealed fragmented and discontinuous vascular profiles, in which thrombi were also detected. In Bussani et al.'s work , vasculitis of the pulmonary macro- and microvasculature was histologically detected in 10/41 cases. Of the 41 cases, 11 were immunohistochemically assayed for the COVID-19 spike protein and for the endothelial activation and dysfunction markers CD142 (tissue factor), CD62 (E-selectin), and VCAM-1 (an adhesion molecule). In all cases, the samples showed immunoreactivity to the above-mentioned markers, along with evidence of endothelial alterations and macro- and microvascular thrombosis. The evidence of markedly elevated levels of a subset of cytokines, such as IL-6 and TNF-α, in critically ill patients has led to the hypothesis that, in the setting of the COVID-19-related cytokine storm, these could play a major role in enhancing the endothelial damage . This hypothesis has been supported by a couple of works in which the supposedly involved cytokines were immunohistochemically assayed on lung samples. Cipolloni et al. carried out immunohistochemical investigations on samples obtained from two patients who died from COVID-19 and compared the results with those from a control patient (a 1-month-old newborn). The findings showed that in both cases (but not in the control case) the endothelial cells were infected by COVID-19 (positive immunoreactivity to the COVID nucleocapsid), while thrombosis activation was demonstrated by positive immunoreactivity to factor VIII; within the affected vessels, a positive signal for TNF-α and IL-6 was also detected. A similar investigation carried out by Nagashima et al. on postmortem lung biopsies of six COVID-19 cases showed remarkably high levels of TNF-α in alveolar septal cells and alveolar capillary cells, and high levels of IL-6, ICAM-1, and caspase-1 in the endothelial cells. Based on these results, the authors postulated that the cytokine storm that follows SARS-CoV-2 infection induces a breakdown of the endothelial glycocalyx (which normally acts as a barrier against platelet activation), thus inducing endothelial dysfunction, endotheliitis, and thrombotic events. They further postulated that the activation of caspase-1 may contribute, via activation of the inflammasome, to the further release of other pro-inflammatory cytokines, such as IL-1β and IL-18, within capillary-alveolar endothelial cells.
In their work, Magro et al. also found increased levels of several cytokines related to severe COVID-19 forms (IL-6, TNF-α, IL-1β, IL-8, p38, and IFN-γ) in the lungs within areas of viral proliferation; an increase in caspase-3 and programmed death-ligand 1 (PD-L1) was also observed in pulmonary endothelia containing infectious virus. Direct evidence of platelet-rich thrombi was provided in the few works in which CD61 immunostaining was performed, such as Lupariello et al.'s , in which diffuse platelet aggregation was highlighted in small- and medium-sized pulmonary vessels. In Hanley et al.'s work , CD61 immunostaining revealed macro- and microscopic thromboemboli, along with platelet- and fibrin-rich thrombi in alveolar capillaries and in small and medium pulmonary vessels. Fox et al. reported evidence of both platelet-producing megakaryocytes and microthrombi within the alveolar capillaries. Along with platelet-fibrin thrombi, Carsana et al. observed an increase in alveolar capillary megakaryocytes in 33/38 cases analyzed. Similar results were obtained by Rapkiewicz et al. and Menter et al. . The former found thrombi in large and small pulmonary vessels, while platelets were detected within the alveolar capillaries; in addition, an increase in megakaryocytes within the pulmonary microvasculature was observed. As for Menter et al.'s work , microthrombi were detected in the alveolar capillaries in 5/11 cases in which immunohistochemistry for fibrin was performed. A possible correlation between endothelial damage and complement activation via the alternative pathway (AP) has also been postulated [ , , , ]. Such a linkage was assessed in an interesting work by Magro et al. , in which the authors detected positive immunoreactivity to C5b-9 (membrane attack complex, MAC), C3d, and C4d in the inter-alveolar septal capillaries of all five cases analyzed, where these co-localized with anti-COVID spike protein antibodies. Furthermore, based on the discovery that mannose-binding lectin (MBL) binds to the SARS-CoV-2 spike glycoprotein, the authors also postulated a possible involvement of the lectin pathway (LP), which, through a positive feedback loop, contributes to sustaining alternative pathway activation, further enhancing endothelial damage and activation of the coagulation cascade.
Both in clinical and postmortem settings, severe forms of COVID-19 infection frequently show multi-organ involvement, although most findings are largely nonspecific, making it challenging to understand whether organ involvement depends on direct viral damage, on a dysregulated immune response, or on thrombosis-related ischemia [ , , ]. The detection of low SARS-CoV-2 levels in organs such as the heart, liver, kidneys, and brain is consistent with secondary involvement due to the ubiquitous expression of the ACE2 receptor . Duarte-Neto et al. classified the extra-pulmonary findings into three groups according to the possible cause: 1. findings due to comorbidities (myocardial hypertrophy and fibrosis; coronary artery disease; renal atherosclerosis; liver steatosis) [ , , , , , ]; 2. shock-related findings (acute tubular necrosis; centrilobular congestion) [ , , , , , ]; 3. findings of uncertain etiology (i.e., secondary to infection by SARS-CoV-2, systemic inflammation, or shock, such as leukocytic infiltrates and thrombosis) [ , , , , , , , , , , ]. Magro et al. carried out an immunohistochemical assessment of the deposition of C5b-9, C3d, and C4d in the lung (data reported above), heart, liver, kidney, brain, and skin of 12 cases. The results revealed significant endothelial and subendothelial microvascular deposition of C3d, C4d, and/or C5b-9 in all cases. Endothelial damage-related cytokines (IL-6, TNF-α, IL-1β, IL-8, and p38) were also assessed, showing a significant increase in the microvascular extra-pulmonary endothelia, where they strongly co-localized with both the viral spike protein and the ACE2 receptor, including in the skin and brain. Finally, Rapkiewicz et al. highlighted CD61+ platelet-rich thrombi in the hepatic, renal, and cardiac microvasculature in all cases tested; megakaryocytes were additionally observed in the cardiac and glomerular microvasculature.
To confront the COVID-19 pandemic, four vaccines were authorized by the European Community between December 2020 and January 2021: BNT162b2 (Pfizer–BioNTech); mRNA-1273 (Moderna); ChAdOx1 nCov-19 (AstraZeneca); and COVID-19 Vaccine Janssen (Johnson & Johnson). Despite a positive benefit–risk ratio, vaccination with ChAdOx1 nCov-19 has been a source of public distrust because of the development, within one to three weeks of vaccination, of severe vascular adverse reactions temporally related to vaccine administration (according to the EMA report of 7 April 2021: 169 cases of cerebral venous thrombosis, 53 cases of abdominal venous thrombosis, and 18 fatal cases, among around 34 million vaccinated people in the EEA and UK ). In order to produce a workflow aimed at defining the relationship between Adverse Events Following Immunization (AEFI) and COVID-19 vaccination, Pomara et al. carried out postmortem investigations on two otherwise healthy subjects who died 19 and 24 days, respectively, after vaccination with ChAdOx1 nCov-19. Prior to death, on admission to the Emergency Department, both presented severe thrombocytopenia, low plasma fibrinogen, and very high levels of D-dimer; on CT examination, case 1 showed occlusive portal vein thrombosis with smaller thrombi in the splenic and upper mesenteric veins and a massive intracerebral hemorrhage, while case 2 showed a very large intracranial hemorrhage. Along with gross and histologic investigations, which confirmed the antemortem CT findings, the authors also performed immunohistochemistry to characterize the immune infiltrates (CD163, CD66b), evaluate the presence of the adhesion molecule VCAM-1, and verify the activation of the complement pathway (C1r, C4d) and the deposition of anti-Platelet Factor 4 (PF4) antibodies. Strong positivity for adhesion molecules (VCAM-1), activated inflammatory cells (CD66b+, CD163+, CD61+) expressing the complement fraction C1r, and anti-PF4/polyanion antibodies was detected in vascular and perivascular tissues of the heart, lung, liver, kidney, ileum, and deep veins. These findings not only led the authors to confirm a causal link between vaccination with ChAdOx1 nCov-19 and the development of immune thrombocytopenia mediated by platelet-activating antibodies against Platelet Factor 4 (PF4), but also allowed them to formulate a few hypotheses on its pathogenesis. Among these, the most credited is the production of antibodies that recognize Platelet Factor 4 (which is involved in blood clot formation), probably following exposure to polyanionic substances that are part of the vaccine composition, thus mimicking heparin-induced autoimmune thrombocytopenia (HIT). These considerations complement research performed in clinical settings on patients with severe vascular adverse reactions to the ChAdOx1 nCov-19 vaccine . Indeed, the most widely shared view of the pathogenesis of vaccine-induced immune thrombotic thrombocytopenia (VITT) is that an adenoviral vector vaccine can trigger an immune response leading to highly reactive anti-PF4 (anti-platelet factor 4) antibodies that activate platelets through their FcγRIIa receptors (CD32) [ , , , ].
In particular, this receptor is distributed on several cell types, such as platelets, monocytes, macrophages, neutrophils, natural killer cells, and endothelial cells, and studies have described specific CD32 polymorphisms (i.e., the 131 Arg-His heterozygous or 131 His-His homozygous phenotypes) associated with enhanced platelet activation in HIT . Greinacher et al. suggested the following sequence of events ( ): (i) vaccine constituents (i.e., polyanions such as glycosaminoglycans, polyphosphates, or DNA) interact with platelets, resulting in platelet activation and release of PF4; (ii) PF4 binds vaccine constituents, forming multimolecular aggregates; (iii) the EDTA contained in the vaccine increases capillary leakage and vascular permeability, with dissemination of vaccine proteins (i.e., viral proteins and proteins of human origin) into the blood; (iv) an inflammatory signal is generated, stimulating the immune response, probably in association with immune complexes (vaccine constituents, including their complexes with PF4, and preformed natural IgG); (v) anti-PF4 antibody production is promoted by the stimulation of preformed B cells; and (vi) PF4/IgG immune complexes activate platelets, which release additional PF4, together with crosstalk with neutrophils that leads to NETosis and a prothrombotic response. Moreover, on the basis of previous studies describing HIT pathogenesis, it can be supposed that, in addition to platelets, the antibodies can induce activation of endothelial cells, natural killer cells, and monocytes, which release tissue factor (TF) and thereby contribute to thrombosis .
Since its outbreak, COVID-19 has rapidly spread throughout the world, causing high mortality and morbidity rates . Knowledge of the mechanisms used by the virus to infect target cells has recently led to the production and distribution of several vaccines as a prevention strategy . Nonetheless, partly because of incomplete adherence to the vaccination campaign, COVID-19 has continued to affect thousands of people worldwide, also favouring the emergence of new variants. In light of this, an accurate comprehension of the pathophysiological mechanisms underlying the evolution of COVID-19 disease, especially in critical patients, is of utmost importance in order to find effective treatment strategies. To this end, a multidisciplinary approach is needed, comprising clinical, biochemical, radiologic, biomolecular, and forensic investigations, each representing a piece of the whole puzzle [ , , ]. Within a forensic context, autopsies have proved helpful in achieving a correct diagnosis in otherwise uncertain cases, assessing whether a patient died "from" or "with" COVID-19, thus providing reliable epidemiological, pathological, and global health data . Furthermore, the possibility of directly analyzing each single organ has helped identify the main pathological changes induced by the viral infection, namely DAD and thrombotic macro- and microangiopathy. Based on the summarized results, reliance on immunohistochemistry as an ancillary technique in the postmortem setting has proved useful for the comprehension of the main cellular infiltrates and mediators recruited in the most affected organ, the lungs. The comparison of these results with findings from other works, carried out in clinical settings or with biochemical/biomolecular techniques, has allowed the comprehension of the main pathophysiological mechanism of COVID-19 infection, namely the onset of a hyperinflammation state mediated by a cytokine storm-induced cytotoxic and T helper response primarily affecting the pulmonary parenchyma and vasculature; the massive activation of the immune system and the microvascular damage might also be responsible for the indirect damage caused to other organs, even if a direct viral effect cannot be excluded. Such aspects, further investigated at a molecular level, could represent the target for selected therapies aimed either to block virus entry into target cells, to dampen the hyperimmune response, or to stabilize the endothelium and inhibit platelet activation and aggregation. Lastly, the immunohistochemical evidence of the co-localization of anti-PF4 antibodies with inflammatory cells, platelets, and complement mediators in patients who died of thrombotic complications following ChAdOx1 nCov-19 administration integrates the clinical evidence, contributing to improved knowledge of the pathophysiology of VITT. However, larger case series are needed for a better definition of such mechanisms, also evaluating the possible relation between an individual's propensity to develop VITT and genetic factors (i.e., polymorphisms of CD32). In conclusion, postmortem research, along with clinical studies, is a useful tool for understanding the pathogenesis and pathophysiology of both COVID-19 and VITT, contributing to the benefit of global public health.
The "autopsy" enigma: etymology, related terms and unambiguous alternatives
Three quarters of contemporary English medical terminology is estimated to be of Greek origin; unsurprising, given the pioneering impact on modern medicine from 500 B.C. classical Greece . Until relatively recently, linguistic contact between living Greek and English languages was not possible, and so lexical diffusion was necessarily indirect. Vocabulary items were mostly borrowed through Latin, via written media and daughter languages (the Romance languages, particularly French), or from Ancient Greek texts. The concerted use of Greek-derived medical terms in the present day allows us to facilitate effective communication while honouring the historic roots of Western medicine. One such medical term now more commonly represents a procedure that directly contradicts its original intended sense. As a result, the word autopsy has, throughout history, bewildered death investigation stakeholders. Its continued use in the decision-making process for how invasive a postmortem examination ought to be may confuse and alienate families at a time where clarity is exceptionally important. How are we meant to counsel and consent the deceased's next-of-kin if we, as death investigators, cannot agree on definitions for the very procedures we are proposing? This review explores the etymological journey of autopsy, considers which related terms have been popularised throughout history, introduces the concept of lexical ambiguity, and suggests unambiguous alternatives to satisfy a recent appetite for clarity in international professional and next-of-kin communication, as discussed by previous authors . The term autopsy derives from its third century B.C. Hellenistic Greek etymon αὐτοψία (autopsia, "to see for oneself"); an amalgamation of αὐτός (autos, "oneself") and ὄψις (opsis, "sight; view") . Αὐτοψία at this time vaguely denoted the self-inspection of something, without physically touching it. The object being inspected or observed could be virtually anything, and was certainly not restricted to deceased human bodies. It was used in a literal sense to portray self-inspection by Galen (Κλαύδιος Γαληνός; 129–216 A.D.) in his seminal text, later translated into the Latin De Anatomicis Administrationibus . The Byzantine Greek αὔτοπτος was used until 1453 and subsequently borrowed into Neo-Latin as autopsia . Autopsia came to reference those observations made on live patients by a physician for the purposes of diagnosis, contrasting with historia (denoting information supplied by patients themselves) . It was much later when the phrase autopsia cadaverum ("autopsy of cadavers", with variants like autopsia cadaverica) was written into several Latin medical texts, including the 1765 Synopsis Universae Praxeos-Medicae of the French physician Joseph Lieutaud . Autopsia transitioned into the Middle French autopsie; attested 1573 from a source cited in Desmaze's Curiosités des anciennes justices (though the context does not make the precise sense clear) . Autopsie is again attested 1665, without context, in a list of scientific terms used in the unpublished letters of a seventeenth century French physician . Authoritative dictionaries have assigned these instances to the sense "postmortem examination" .
However, given the lack of source context, the widespread religious prohibition of human dissection before the eighteenth century, and the infrequency with which the sense "postmortem examination" was referenced at the time, it seems probable that in at least one of these two instances the author(s) meant "careful visual examination of a living patient". The French autopsie underwent semantic narrowing from the passive "self-inspection of something without touching", to a purposeful action by an operator performing "an examination of the human body itself", to specifically "dissection of a dead human body" . This curious turning point for the meaning of autopsie created an auto-antonym: the same word now has multiple meanings, of which one is the reverse of another. The French autopsie used in the latter sense predates that documented for the English autopsy, Spanish autopsia, Italian autopsia and German autopsie, although attestations are rare in all languages before the beginning of the nineteenth century . Perhaps as a result of the lexical ambiguity of autopsie, attempts were made to remedy the discrepancy between conflicting senses either by adding a determining adjective to the existing noun (the popular autopsie cadavérique is attested 1801, and the rarer autopsie cadavéreuse 1821), or by creating the newer nécropsie to specifically denote "an examination of a corpse" (attested 1826). However, the latter has never succeeded in supplanting autopsie. Use of the English autopsy as applied specifically to "an examination of a dead human body" is attested 1829, when von Ruhl, Creighton and Bluhm made an account of the case of the Empress Feodorovna of Russia . The term was accepted by 1881, at which point the New Sydenham Society's Lexicon for that year reads "it has of late been used to signify the dissection of a dead body" . In the same text, autopsy appears alongside autopsia ("self-inspection; evidence actually present to the eye") and the elaborative autopsia cadaverica ("a post-mortem examination"). Pepper's 1949 Medical Etymology describes autopsy aptly as "a curious term" . The current definition of autopsy varies according to the source. It can be a noun (i.e. the examination process), a transitive verb (i.e. the examination act) or an adjective (i.e. describing someone or something that has undergone an autopsy). The following are excerpts from nine authoritative English dictionaries, defining the former word class:
au●top●sy, noun. ˈɔː.tɒp.si.
The American Heritage Dictionary of the English Language: Examination of a cadaver to determine or confirm the cause of death. A critical assessment or examination after the fact.
Cambridge Advanced Learner's Dictionary: The cutting open and examination of a dead body in order to discover the cause of death.
The Chambers Dictionary: A postmortem. Any dissection and analysis.
Collins English Dictionary: Dissection and examination of a dead body to determine the cause of death. An eyewitness observation. Any critical analysis.
Longman Dictionary of Contemporary English: An examination of a dead body to discover the cause of death.
Macmillan Dictionary: A medical examination of a dead person's body to find out why they died.
The Merriam-Webster Dictionary: An examination of a body after death to determine the cause of death or the character and extent of changes produced by disease. A critical examination, evaluation, or assessment of someone or something past.
Oxford English Dictionary: The action or process of seeing with one's own eyes; personal observation, inspection, or experience. Examination of the organs of a dead body in order to determine the cause of death, nature and extent of disease, result of treatment, etc.; a post-mortem examination; an instance of this. A critical examination or dissection of a subject or work.
Random House Kernerman Webster's College Dictionary: The inspection and dissection of a body after death, as for the determination of the cause of death. A critical analysis of something after it has taken place or been completed.
As is exemplified above, some lexicographers attempt to capture a physical act with phrases like "examination of the organs" and "cutting open", while others fixate on the outcome: "to determine the cause of death" or "changes produced by disease" . These definitions imply that the primary aim of the autopsy is to determine the cause of death, and none mentions how this might be achieved apart from cutting or dissecting. None of the aforementioned definitions fully represents the diversity of postmortem procedures for the purposes of death investigation. For instance, the postmortem examination does not necessarily involve entering the body in any way, and its aim is not always to find a cause of death either: amongst other things, such examinations help to determine viability in infants, manner of death and post-mortem interval; they facilitate identification and organ retrieval; and they can be used for research purposes. In short, one might make a postmortem examination of varying invasiveness in order to answer several different questions from a range of stakeholders. Forensic pathology texts use the word autopsy frequently, some exclusively, with authors providing their own definitions. Knight refers to the autopsy as "an innately destructive process [that] can cause artifacts"; Dolinak writes "the autopsy consists of an external examination, followed by internal examination of the organs"; and Prahlow describes "a surgical examination performed on a dead body… involves opening the abdomen, chest, and head to examine and then remove the organs for dissection, with or without subsequent examination of microscopic sections" . The Human Tissue Authority, National Health Service and Royal College of Pathologists all define autopsy vaguely as "an examination of a body after death" . In contrast to the English interpretation of autopsy, Greek forensic practitioners use the translated equivalent αυτοψία to refer to any careful examination, without destroying evidence, of the crime or death scene . This interpretation is a more literal one; a testament to the relatively direct evolution from Ancient to Modern Greek. Nowadays, autopsy occurs between 1 and 10 times per million words in typical modern English usage, along with other words which are considered to be distinctively educated while not being overly technical or jargon (example nouns at a similar frequency include surveillance, assimilation and paraphrase) . Since the early nineteenth century, attempts have been made to remedy the discrepancy between conflicting senses either by adding determining adjectives to the existing noun, or by substituting autopsy with another word altogether, although none has succeeded in surpassing its popularity for over a century (Fig. ).
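Frequency comparisons of this kind (as in Fig. ) are commonly built from the Google Books Ngram corpus. The sketch below queries the unofficial JSON endpoint behind the Ngram Viewer and prints each term's peak relative frequency; that endpoint, its parameters, and its response shape are assumptions on our part and may change, and the same comparison could equally be made from the downloadable Ngram datasets.

```python
import requests

# Query the unofficial JSON endpoint behind the Google Books Ngram Viewer.
# The URL, parameters, and response shape are assumptions based on the public
# viewer and may change or be rate-limited; the official alternative is the
# downloadable Ngram datasets.
URL = "https://books.google.com/ngrams/json"
params = {
    "content": "autopsy,necropsy,post-mortem examination",
    "year_start": 1800,
    "year_end": 2019,
    "corpus": "en-2019",  # corpus identifier; older API versions used numeric ids
    "smoothing": 3,
}

resp = requests.get(URL, params=params,
                    headers={"User-Agent": "Mozilla/5.0"}, timeout=30)
resp.raise_for_status()

for series in resp.json():  # one dict per term, with a yearly "timeseries" list
    print(f"{series['ngram']}: peak relative frequency {max(series['timeseries']):.2e}")
```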
The term postmortem examination is an example: a borrowing from Classical Latin post ("after") and mortem, accusative of mors ("death"), attested 1834 . The term is frequently shortened simply to postmortem, and may be hyphenated or unhyphenated for the sense "examination of a dead body" (although the latter form is not also used for the "after death" adverb). Knight remarks "the term 'post-mortem examination' is a common alternative, especially in Britain, where its meaning is never in doubt. Unfortunately, it suffers from a lack of precision about the extent of the examination, for in some countries many bodies are disposed of after external examination without dissection" . However, one may argue that the word autopsy provides even less information about the content of the examination, given its original sense "self-inspection of something without touching it" and current polysemy. Knight observed the relative popularity of postmortem examination over autopsy in Britain; use of the former was preferred between the 1830s and 1930s in British English compared with American English texts, as represented by Fig. . Substitutions of autopsy for postmortem examination were common: the 1885 English translation of Virchow's Die Sections-Technik preferred the term postmortem examination over autopsy, as did Hektoen in his 1894 The Technique of Post-mortem Examination . Nowadays in the United Kingdom, statutory and regulatory bodies tend either to offer vague, overarching definitions for autopsy, or to replace it altogether with postmortem examination, as has been the case with recently amended Home Office publications . UK Government legislation makes no reference to the autopsy, and instead refers only to postmortem examinations . This is epitomised by Acts governing activities involving human tissue , and those involving the authorisation of postmortem examinations by judicial officers . A contributor to JAMA's 23rd issue in 1901 posed a dilemma presented to the US Circuit Court in Kentucky, illustrating the importance of accurate language in these circumstances : when a person taking out a life insurance policy permits a medical advisor to examine the body after death, does this give the company the right to make an invasive postmortem examination? Indeed, the court "did not think that any ordinary person would suppose that they were agreeing to what would have been much more clearly expressed by the word 'autopsy' or by the word 'dissect'… While an autopsy, generally speaking, always includes an examination, the court does not think that an examination always includes an autopsy". Another term that overtook postmortem examination in popularity from the 1910s was necropsy (attested 1842), formed in English by compounding necro- ("death") and -opsy ("visual inspection"), probably modelled on the aforementioned French nécropsie . Pepper's Medical Etymology describes necropsy simply as "a better term than autopsy" . Knight writes "though 'necropsy' is semantically the most accurate description of the investigative dissection of a dead body, the word 'autopsy' is used so extensively that there is now no ambiguity about its meaning" . Necropsy is also considered a more general term without reference to species . Autopsy in its early sense "self-inspection" led many to believe that the frame of reference for "self" was "ourselves"; i.e. our own species, humans.
As such, the term was proscribed for the postmortem examination of non-humans, which was instead designated a necropsy . However, the current meaning of necropsy is subject to similar criticism as autopsy: strictly, the word portrays "inspection of a dead body", but it is more often used in the sense "dissection of a dead body". In contrast to its English interpretation, Greek forensic practitioners use νεκροψία to denote an observation of the intact (not yet dissected) deceased . In Greece, the necropsy would be considered synonymous with the non-invasive or external-only postmortem examination . Necrotomy is a compound of necro- ("death") and -otomy ("dissection"), and is seldom used in English . The Greek equivalent νεκροτομία is used to denote "dissection of a dead body", and is considered synonymous with the invasive or internal postmortem examination . Several other modern words now use the autopsy root to describe various forms of postmortem examination, and their quantity reflects the sheer variability in procedures. The least invasive is the so-called verbal autopsy ("a method used to ascertain the cause of a death based on an interview with next of kin or other caregivers"); a contradiction in terms, given that no examination of the body is actually undertaken, and which Burton suggests would be better represented by postmortem clinical case review . Pathological examinations have embraced new technologies, and non-invasive postmortem examinations are often supplemented with various imaging modalities. The so-called virtopsy is a portmanteau of virtual and autopsy, and is a trademark registered to Dirnhofer, the former head of the Institute of Forensic Medicine at the University of Bern, Switzerland . A similar buzzword, echopsy, describes a modified needle autopsy technique with ultrasonography . Where a postmortem examination does not provide a satisfactory answer for the cause of death, the term negative autopsy is sometimes used. The use of genetic analytic techniques to determine the cause of death in these unexplained cases is represented by the term molecular autopsy, first proposed 20 years ago . Indications for postmortem procedures also vary. In England and Wales, there are two fundamental types of postmortem examination: hospital and coronial (usually subdivided into routine coronial and forensic cases). The hospital invasive postmortem examination rate was 0.51% of all deaths in England and 0.65% of all deaths in Wales in 2013 . Routine coronial and forensic invasive postmortem examinations were performed in 16% and 0.8% of deaths in the same year, respectively . Confusingly, the vast majority of postmortem examinations instructed by the coroner are performed in a hospital mortuary by histopathologists who are also employed by the National Health Service. The term coronial strictly means "relating to a coroner", and therefore any postmortem examination authorised by a coroner is, in essence, coronial . However, in England and Wales, coronial cases tend to refer to those that are not forensic . The word forensic derives from Classical Latin forēnsis ("of or belonging to the Forum; of or connected with the law courts") and its current definition has largely retained this meaning ("of, relating to, or associated with proceedings in a court of law") .
According to this definition, one would expect the forensic postmortem examination to automatically describe any qualifying coroner-requested procedure, as is the case in almost every other country with an established forensic pathology service, including Scotland (the Procurator Fiscal distinguishes between those cases likely to progress to court and those not, named according to the statutory requirement for corroboration in Scots law: one-doctor or two-doctor postmortem examinations). In England and Wales, the routine coronial and forensic postmortem examinations are distinguished by the cost to the coroner, the requirement for a Home Office registered forensic pathologist to perform the procedure, and a higher level of scrutiny with the expectation that the case will be heard in court. To complicate things further, hospital postmortem examinations are sometimes referred to as consented, and their coronial counterpart as non-consented, given that informed consent is not mandatory in coronial cases. However, families must be notified and will likely be counselled on the advantages and disadvantages of a postmortem examination as applied to an individual case, and may be asked for their “consent” in the sense that the coroner should pay appropriate respect to families’ held religious and cultural wishes with regards to the treatment of the deceased body.

When deciding how to deploy language in daily conversation or written literature, a decision must be made: is accurate communication more important than ease or tradition? Should we honour words that are common but misleading? An estimated 80% of common English words have multiple related dictionary senses, but the word autopsy is antilogous: it represents multiple senses, at least one of which (“self-inspection”) is almost the reverse of another (“dissection of a dead body”). Because of this, a reader/listener must first decipher exactly which definition is intended to understand any sentence containing the word. This “disambiguation” process involves encountering an ambiguous word, rapidly and automatically retrieving in parallel all known meanings (“exhaustive access”), and then selecting the single meaning that is most likely to fit with that particular context. The most comprehensively studied and best understood brain regions responsible for this process are the posterior and middle subdivisions of the left inferior frontal gyrus (the eponymous “Broca’s Area”). For words with multiple senses, there may be either a so-called “ambiguity advantage” (ambiguous words with multiple related senses are quickly and accurately accessible, conferring faster visual lexical decisions when compared with unambiguous words) or an “ambiguity disadvantage” (multiple unrelated meanings lead to slower visual lexical decisions in the same experiments). At present, there are no published studies investigating which term denoting human dissection is easiest to contextualise, and whether the word autopsy confers an “ambiguity advantage” or “disadvantage” relative to its counterparts. The widespread use of ambiguous language when referring to postmortem procedures will likely lead to skewed perceptions of the general public towards them.
The most common sources of postmortem examination-related information in the UK are television and mainstream media, so the beliefs held by the public are perhaps unsurprising: 97% of people in a Sheffield-based sample believed that “post-mortems” involved “examining the inside of the body” whereas only 84% acknowledged that they involved “examining the outside of the body”, demonstrating a relative ignorance of less-invasive techniques. Recent studies have highlighted the contribution of recent exposure to disambiguation, demonstrating that we are biased to select recently-encountered meanings. So, while the word autopsy may strictly refer to any postmortem examination (ranging from inspection to dissection), this principle of “word-meaning priming” means that, because the general public are exposed to the word autopsy in the sense “dissecting a dead body” more than “inspecting a dead body” from television or media, they may be more likely to favour the more invasive meaning in any given situation.

Instead of using the autopsy noun with hospital, coronial and forensic adjectives, it is perhaps more useful for families to define a procedure by: (i) who requested the postmortem examination, (ii) for what purpose, and (iii) who intends to perform the postmortem examination. For instance, “a non-invasive postmortem examination and computed tomography scan requested by a coroner to determine a cause of death, performed by a Home Office registered forensic pathologist” or “an invasive postmortem examination requested by a consultant cardiothoracic surgeon to understand the pathophysiology of known surgical complications, performed by a histopathologist”. The definitions in Table would preserve tradition and communication by offering a more logical, sensible lexicon for pathologists performing postmortem procedures, and normalise using universally understood language for bereaved families.

Language standardisation is the process by which conventional forms of a language are established and maintained. A standard language typically arises either: (i) without formal government intervention, as is the case with Standard English; or (ii) after being formally prescribed by language authorities, such as the French Académie Française and the Spanish Real Academia Española. Given the poor standardisation of English words denoting postmortem procedures (particularly across state and private dictionaries, forensic pathology texts, and individual institutions), a degree of language planning may be necessary to improve communication. Language planning in this context, amongst other factors, involves balancing lexical ambiguity, word familiarity, frequency of use, similarity with other languages and tradition. The apparent success of codification depends largely on its acceptance by a population as well as its implementation by Government and authoritative bodies. The term postmortem examination is already preferentially used in key UK legislation relating to death investigation and human tissue handling. For pathologists, the proposed lexicon (Table ) may be used in reports, during court proceedings, and in communications with lay-people and experts alike. For researchers, standard terms may be used in published material, so as to reduce uncertainty about the scope and extent of postmortem procedures, and to facilitate research communication globally.
The word autopsy evolved from its Hellenistic Greek etymon αὐτοψία (“to see for oneself”), and progressed through its Neo-Latin and French forms: autopsia and autopsie, respectively. Only relatively recently has the English word been attributed to the sense “dissection of a dead body”, and since this time it has confounded lay and professional understandings of postmortem investigative procedures. Those working within the death investigation sphere should be aware of the uncertainties surrounding this confusing terminology, and use appropriate, accurate language to describe the procedures they are counselling and consenting families on. The historical and geographical variability of autopsy also makes the term unsuitable for communication on an international stage. There have been conscious efforts by policymakers and death investigators to replace the term with unambiguous English compound (necropsy and necrotomy) and Latin-derived (non-invasive and invasive postmortem examination) alternatives to satisfy a recent appetite for clarity in international professional and next-of-kin communication.

The word autopsy underwent significant semantic change over the course of history.
Modern definitions of autopsy are greatly variable, and differ from its original sense.
Autopsy definitions misrepresent the diversity of postmortem procedures, such that alternative nouns and determining adjectives are needed for clarity.
There have been efforts to replace the term with unambiguous alternatives.
Using standard language improves international professional and next-of-kin communication.
Overall survival benefits of cancer drugs in the WHO Model List of Essential Medicines, 2015–2021

From 2015 to 2021, 22 targeted cancer drug indications were recommended for inclusion in the WHO EML. WHO reviews documented overall survival (OS) benefit for 68.2% (n=15) of these indications, and pivotal trials in Food and Drug Administration-approved labels did so for 31.8% (n=7), at the time of EML inclusion decisions. Of 11 targeted cancer drug indications recommended for inclusion since implementation of magnitude of benefit criteria in 2019, 54.5% (n=6) and 9.1% (n=1) had evidence of OS benefit >4 months in WHO-Technical Report Series and in pivotal trials, respectively; 45.5% (n=5) met European Society for Medical Oncology-Magnitude of Clinical Benefit Scale criteria.
Our findings highlight opportunities for improving the application of clinical benefit criteria and for better documenting rationales for cancer drug listings in the WHO EML.
Cancers cause worldwide morbidity and mortality, affecting over 19 million individuals and leading to nearly 10 million deaths in 2020, with a disproportionate death toll in low-income and middle-income countries (LMICs). Over the past half-century, better understanding of the biology of cancers has led to the development of new cancer treatments, some of which have greatly improved the survival of cancer patients in high-income countries. The situation differs for patients in LMICs, who have limited access to advanced cancer care, including diagnostics, cancer drugs, well-trained personnel and well-equipped facilities. In middle-income countries where the services and facilities may exist, access to medicines and opportunities for better outcomes remain limited to those who can pay for the highly priced treatments.

Since 1977, the WHO has published and updated every 2 years the List of Essential Medicines (EML). The WHO EML is intended as a guide for countries and regional authorities, especially in low-income and middle-income settings, to design national essential medicines lists for medicines approval, procurement and reimbursement decisions. The original WHO EML recommended six cancer drugs, and new cancer drugs were added in 1984, 1995 and 1999. Given the discrepancy in cancer burden between high-income countries and LMICs and advances in the treatment of some cancers in high-income countries, there was a strong call for narrowing the gap in access to cancer drugs worldwide. Compared with other classes of drugs, the selection process for cancer drugs has been more challenging due to the large volume of newly developed drugs approved rapidly with uncertain benefits and marketed at high and increasing prices.

To ensure the clinical benefits of the recommended cancer drugs in EMLs, the WHO has launched a series of evidence-based updates. In 2014, WHO commissioned the Union for International Cancer Control to undertake a comprehensive review of cancer drugs in the 18th EML published in 2013 and of new medicines proposed for inclusion by researchers and organisations. ‘Meaningful improvements in overall survival (OS) compared with the existing standard of care’ was a criterion for the 2015 additions of new, highly priced targeted cancer drugs. Different from traditional chemotherapy, targeted cancer drugs act on specific proteins that control cancer cells’ growth and spread. Targeted cancer drugs constitute the majority of newly approved cancer therapies and, since 2015, an increasing number of cancer drugs have been recommended for inclusion on the WHO EML. Magnitude of benefit was one of the criteria considered since the 2015 cancer drug listings and was quantified in 2018 in two metrics: (1) a threshold for OS benefit of at least 4–6 months and (2) a score on the European Society for Medical Oncology-Magnitude of Clinical Benefit Scale (ESMO-MCBS) of A or B in the curative setting and of 4 or 5 in the non-curative setting. These criteria have been recommended for the 2019 and 2021 (21st and 22nd) WHO EMLs.

There is debate about the clinical benefit of new cancer drugs, which often are approved based on surrogate outcome measures or on pivotal studies that do not permit inference about clinical benefit. Although WHO proposed two specific criteria for selecting cancer drugs, lack of fidelity may occur because these are guiding principles for selection, among other criteria. However, WHO’s goal is to list only drugs with meaningful clinical benefit, and these adopted guiding principles are important to achieve this goal.
To our knowledge, no studies have examined the documented clinical benefit of targeted cancer drugs in the WHO EML or how approval decisions for the latest WHO EMLs align with WHO’s recent magnitude of benefit criteria for selecting cancer drugs. We address these knowledge gaps by assessing the documented clinical benefits of WHO EML cancer drugs. Our specific aims are to (a) assess documented OS benefit for targeted cancer drugs proposed for EML inclusion since 2015, and OS benefit magnitude and ESMO-MCBS scores for targeted cancer drugs proposed for listing since 2019, and (b) assess the consistency of the latest listing decisions with WHO criteria for EML cancer drugs.
Data sources

The WHO Technical Report Series (TRS) and the WHO electronic EML database were used to identify the applications for listing of targeted cancer drug indications. The WHO TRS documents were used to retrieve basic information and clinical benefit data documented in EML applications. The Drugs@FDA database was used to retrieve evidence of OS benefits in pivotal trials, and the ESMO-MCBS website was used to extract ESMO-MCBS scores for indications proposed for listing.

Study sample

The unit of analysis for this study was the targeted cancer drug indication. We identified applications for targeted cancer drug indications intended for inclusion in the WHO EML based on the final reports of meetings of the WHO expert committee in 2015, 2017, 2019 and 2021, as documented in the WHO Technical Report Series (TRS), Section 8.2. Our study period corresponds to the recent increase in the number of targeted cancer medicines considered for listing in the WHO EML. In TRS Section 8.2, applications included not only targeted cancer drug indications, but also cytotoxic medicines, hormones and antihormones, and supportive cancer care medicines. We used the WHO electronic EML database ( https://list.essentialmeds.org/ ), which allowed us to identify eligible applications of targeted cancer drug indications (8.2.2 Targeted therapies and 8.2.3 Immunomodulators). Applications for new formulations of already listed drugs and applications for reinstatement were not included in the analysis.

For each application for listing of targeted cancer drug indications, we extracted relevant information from two parts of the WHO-TRS: (1) ‘Review of benefits and harms’ (2015) or ‘Summary of evidence: benefits (from applicants)’ (2017, 2019 and 2021) and (2) ‘Recommendations’ (2015) or ‘Committee recommendations’ (2017, 2019 and 2021). Since clinical benefit data from pivotal trials are crucial evidence supporting the use of cancer drugs, and US Food and Drug Administration (FDA) labels list this evidence where it exists, we also gathered this information from the publicly available Drugs@FDA database ( https://www.accessdata.fda.gov/scripts/cder/daf/index.cfm ). We retrieved the most recent FDA-approved labels at the time of WHO listing decisions and reviewed section 14 ‘CLINICAL STUDIES’ to extract clinical benefit data. We extracted ESMO-MCBS scores, based on the trials cited in WHO-TRS, from the publicly available ESMO-MCBS website.

Measures

OS benefit and ESMO-MCBS scores were used as indicators of clinical benefit. We extracted information on study design (study type, trial group, control group) and OS results by reviewing all references cited in the ‘Review of benefits and harms’ (2015) or ‘Summary of evidence: benefits (from applicants)’ (2017, 2019 and 2021) sections of WHO-TRS documents and in section 14 ‘CLINICAL STUDIES’ of FDA-approved drug labels. Cancer drug indications with statistically significant OS results were categorised as having documented evidence of OS benefit. We categorised cancer drug indications as having unknown or unavailable documented evidence of OS benefit if (1) trial results were not statistically significant, (2) OS results were not reported or could not be calculated or (3) the FDA-approved drug label was unavailable or the drug was not approved by FDA. Based on the trials cited in WHO-TRS, we further extracted the highest score for the proposed indications from the ESMO-MCBS website. Cancer drug indications with an ESMO-MCBS score of A or B in the curative setting and of 4 or 5 in the non-curative setting were categorised as meeting the EML selection criterion. We categorised cancer drug indications as not meeting the criterion if (1) the cancer drug indications could not be found on the website, or (2) the trials cited by WHO-TRS were not used by ESMO-MCBS for score evaluation.

Data analysis

We assessed WHO listing decisions since 2015 with respect to evidence of OS benefit for the cancer drug indications as described in WHO-TRS. We also assessed 2019 and 2021 decisions with respect to evidence of magnitude of OS benefit >4 months (a median gain in OS in the treatment arm of more than 4 months compared with that in the control arm) and ESMO-MCBS scores of A or B (curative) or 4 or 5 (non-curative). Then we compared the availability of evidence of OS benefit extracted from WHO-TRS and from pivotal trials (as obtained from FDA-approved labels). We noted if one source had documented evidence of OS benefit while the other did not. We then assessed the evidence of OS benefit for the same cancer drug indications which were applied for more than once, to examine whether new evidence was added in later applications.

We further conducted a content analysis to assess how WHO-TRS communicated the evidence supporting listings, especially for those indications that did not have documented evidence of OS benefit. We also noted whether the rationales underlying WHO inclusion decisions were explicitly stated in the ‘Recommendations’ (2015) or ‘Committee recommendations’ (2017, 2019 and 2021) sections, and whether WHO provided a structured summary based on the selection criteria. We conducted descriptive analyses of cancer drug indication applications across the four most recent WHO EMLs. We further analysed the selection of targeted cancer drug indications in terms of OS benefit based on WHO-TRS and pivotal trials (as reported in FDA-approved drug labels).

Patient and public involvement

Patients or the public were not involved in the design, conduct, reporting or dissemination plans of our research.
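The categorisation rules under ‘Measures’ and ‘Data analysis’ amount to a small decision procedure. The following Python sketch makes that logic explicit; it is an illustration only (the study did not publish analysis code), and the record fields and the example indication are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Indication:
    """One targeted cancer drug indication, as assessed for EML listing."""
    name: str
    os_significant: bool             # statistically significant OS result?
    os_gain_months: Optional[float]  # median OS gain vs control; None if unreported
    setting: str                     # "curative" or "non-curative"
    esmo_score: Optional[str]        # highest ESMO-MCBS score, e.g. "A" or "4"

def documented_os_benefit(ind: Indication) -> bool:
    # Documented OS benefit = a statistically significant OS result.
    return ind.os_significant

def meets_os_magnitude(ind: Indication) -> bool:
    # 2019/2021 criterion: median OS gain of more than 4 months.
    return documented_os_benefit(ind) and (ind.os_gain_months or 0) > 4

def meets_esmo_criterion(ind: Indication) -> bool:
    # ESMO-MCBS: A or B in the curative setting, 4 or 5 in the non-curative setting.
    if ind.esmo_score is None:
        return False  # not found on the website / cited trials not scored
    wanted = {"A", "B"} if ind.setting == "curative" else {"4", "5"}
    return ind.esmo_score in wanted

# Hypothetical example record, for illustration only.
ex = Indication("drug X (advanced disease)", True, 5.2, "non-curative", "4")
print(meets_os_magnitude(ex), meets_esmo_criterion(ex))  # True True
```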
WHO EML cancer drug applications and decisions, 2015–2021

From 2015 to 2021, the WHO Expert Committee considered applications for 54 targeted cancer drug indications, of which 40.7% (n=22) were recommended for inclusion in the WHO EML.

Clinical benefit of targeted therapy applications

Among the 22 targeted cancer drug indications recommended for inclusion in the 2015–2021 EMLs, 68.2% (n=15) and 31.8% (n=7) had documented evidence of OS benefit in WHO-TRS or in pivotal trials, respectively. In addition to the criterion of OS benefit evidence in place for the 2015 EML, starting with the 2019 list, WHO defined a clinically meaningful OS benefit as at least a median of 4–6 months, and ESMO-MCBS scores of A or B in the curative setting or 4 or 5 in the non-curative setting, as EML selection criteria. Of 11 targeted cancer drug indications recommended for inclusion in the 2019 and 2021 EMLs, 54.5% (n=6) and 9.1% (n=1) had evidence of OS benefit >4 months in WHO-TRS and in pivotal trials, respectively; 45.5% (n=5) met ESMO-MCBS criteria; 18.2% (n=2) met both the OS benefit >4 months and the ESMO-MCBS criteria. Among those meeting the ESMO-MCBS criterion, only nivolumab for metastatic melanoma had a score of ‘A’ in the curative setting. Other indications met the criterion for the non-curative setting.

For targeted cancer drug indications that were not recommended (n=23) in the 2019 and 2021 EMLs, we observed that 56.5% (n=13) and 52.2% (n=12) had documented evidence of OS benefit >4 months in WHO-TRS and in pivotal trials, respectively; 78.3% (n=18) met the ESMO-MCBS score criterion; 56.5% (n=13) met both the OS benefit >4 months and the ESMO-MCBS criteria.

Evidence of OS benefit in applications for targeted cancer drug indications from 2015 to 2021 is shown in . Ten targeted cancer drugs had more than one application for the same indications over several application cycles, and five were eventually recommended for inclusion in the WHO EML ( and ). Among the recommended targeted cancer drug indications, only gefitinib for EGFR mutation-positive advanced non-small cell lung cancer (NSCLC) met the WHO EML OS benefit criterion. Compared with documentation in the 2015 WHO-TRS, new OS benefit evidence was provided for erlotinib for treatment of EGFR mutation-positive advanced NSCLC in the 2019 WHO-TRS; however, OS benefit was less than 4 months. Repeated applications for the other recommended targeted cancer drug indications did not provide new evidence of OS benefit.

For 13 targeted cancer drug indications, availability of evidence of OS benefit differed between WHO-TRS and pivotal trials (as reported in FDA-approved drug labels). For 11 indications, evidence of OS benefit was only documented in WHO-TRS; for two indications, documented evidence of OS benefit was only found in pivotal trials. Conflicting OS benefit evidence was observed for four targeted cancer drug indications. Discrepancies were due to different evidence sources (different trials, meta-analysis vs trial, retrospective study vs trial, review vs trial) and treatment comparators. Seven targeted cancer drug indications without evidence of OS benefit were recommended for EML inclusion. Additional factors, such as reduced cost and increased treatment options, seemed to be more important than OS benefits or ESMO-MCBS scores in the selection for the WHO EMLs.
We find that across the four most recent WHO EMLs, about one-third of the recommended targeted cancer drug indications lacked evidence of OS benefit, as indicated by WHO-TRS not reporting or reporting non-significant OS data. The proportion increased to two-thirds when based on OS benefit evidence available in the pivotal trials underlying FDA drug approvals alone. Our results point to inconsistencies in the WHO selection of essential cancer drugs against a desired clinical benefit criterion defined as OS benefit. We also report discrepancies between OS benefit results documented in WHO-TRS and in pivotal trials documented in FDA-approved labels.

Selection of cancer drug indications for the WHO EML is complex. In addition to clinical efficacy, the EML Committee is tasked with considering non-clinical factors including burden of disease, safety, availability of alternative treatment options and cost (both to the health system and to individual patients). Of concern are potential barriers to access to and affordability of essential cancer drugs recommended in the WHO EML. Arguably, access and affordability are only relevant considerations for WHO EML cancer drugs with established clinical benefit, and most importantly, OS benefit.

In recent years, the WHO has put greater emphasis on the development and use of explicit clinical benefit criteria to inform the selection of cancer drugs for the EML. Indeed, WHO has regarded OS benefit as one of the fundamental criteria for essential cancer medicine selection since 2015. In 2018, WHO identified a threshold for OS benefit of at least 4–6 months for all cancer drug indications under consideration. During our study period, we observed that the OS benefit criterion was implemented inconsistently. Across the 2015–2021 EMLs, 15/22 listed targeted cancer drug indications had evidence of OS benefit. Of 11 targeted cancer drug indications recommended for inclusion in the 2019 and 2021 EMLs, WHO-TRS reported evidence of median OS benefit >4 months for six. Of five cancer drug indications that sought inclusion in the WHO EML more than once, three were subsequently recommended without new data on OS benefit. A relatively high proportion of drug indications with OS benefit >4 months were not recommended. Similarly, most of the not-recommended cancer drug indications met the ESMO-MCBS criterion. Our findings suggest that OS benefit and ESMO-MCBS scores may not always be the primary factor in the decision-making process for EML drug selection. For some cancer drug indications without documented evidence of OS benefit or not meeting ESMO-MCBS score criteria, WHO appears to have placed more emphasis on factors other than clinical benefit for inclusion in the EML.

As the USA leads the world in new drug research and development and is often the first market in which new cancer drugs are launched, many LMICs rely on a drug’s FDA approval status to inform its use in their populations. In addition, the clinical trials considered by the FDA are often the only studies available evaluating the efficacy of new cancer drugs. We compared the documented evidence of OS benefit between WHO-TRS and pivotal trials reported in FDA labels and found that benefit evidence differed. The proportion of cancer drug indications recommended without documented OS benefit evidence was higher when based on pivotal trial evidence in FDA-approved labels compared with evidence documented in WHO-TRS.
These differences were primarily attributable to the different sources of OS benefit evidence documented in WHO-TRS and FDA-approved labels. WHO-TRS includes OS benefit information from a wider range of sources, including trials, reviews and retrospective studies, while pivotal clinical trials form the basis of OS benefit evidence in FDA-approved labels. Although the WHO may include follow-up studies that were not included in FDA labels, our findings based on ESMO-MCBS also showed that more than half of the cancer drug indications recommended in 2019 and 2021 lacked ‘clinically meaningful benefit’. In 2018, WHO proposed that availability of evidence from clinical trials, especially high-quality randomised controlled trials, was an important consideration in cancer drug selection decisions. However, our findings highlight opportunities for greater adherence to this important standard when recommending cancer drugs, and the need to further formulate standards for the evidence sources of OS benefit used for EML cancer drug selection.

There are important opportunities for more effectively communicating the evidence to support EML selections, as well as the Committee’s rationales for decisions. First, we suggest a more structured and comprehensive reporting of the evidence that WHO assembles for EML listing decisions. Research has shown that structured formats for presenting clinical trial information can improve end users’ understanding and comprehension. In terms of efficacy, a tabular reporting format may include (a) the source of OS benefit information (ie, whether it was obtained from a randomised controlled trial, meta-analysis of multiple randomised controlled trials, or retrospective analyses), (b) the quality of OS benefit information (ie, risk of trial bias), (c) availability of evidence of OS benefit (yes/no), (d) magnitude of OS benefit ≥4 months (yes/no) and (e) characteristics of populations in which OS benefit was documented. WHO may also more clearly label the cancer drugs without evidence of documented OS benefit at the time of listing to inform decision-makers. Second, the WHO selection committee may make its decision rationales more accessible by consistently reporting whether decisions were driven by (a) clinical efficacy evidence, (b) comparative safety profiles, (c) expected ease of drug administration and/or (d) cost considerations for LMICs, or other factors.

Our study has several limitations. First, we evaluated documentation of OS benefit evidence in WHO-TRS and FDA-approved labels and did not evaluate the quality of the evidence. WHO also adopted, starting with the 2019 EML, criteria for the quality of cancer drug trials. Since the quality of cancer drug trials varies, and poor-quality trials may overestimate the OS benefit of cancer drugs, we may have overestimated adherence of EML selections to the most recent selection criteria. Second, no additional published evidence, such as follow-up studies, was included. This would have been particularly interesting in cases where the study endpoint median OS was not reached. However, the focus of the study was to examine the clinical benefit of cancer drug indications at the time of EML selection. Third, we retrieved ESMO-MCBS scores based on the trials cited in WHO-TRS documents which were also used for evaluation by ESMO. This may underestimate the clinical benefit of the drug indications. Fourth, we do not address public health relevance and safety, which depend on local circumstances.
Finally, we only focus on WHO EML cancer drugs for adults. Further studies should also evaluate selection of cancer drugs for the WHO EML for children.
In conclusion, the WHO EML is designed to support health system decision-makers, particularly in resource-limited settings, in prioritising medicines for regulatory approval, procurement and financing. Since 2015, more targeted cancer drugs have been recommended for inclusion in the WHO EML. Given limited evidence of clinical benefit of new targeted cancer drugs, WHO laudably defined criteria for clinical benefit evidence for cancer drug inclusion in the EML. Our findings highlight opportunities for improving application of these desirable criteria and for better documenting the evidence considered and rationales for WHO EML selection decisions.
|
Mixed-methods evaluation of family medicine research training and peer mentorship in Lesotho
, Cooke’s framework for evaluating research capacity building focuses on ‘process’ domains that are relevant to novice researchers and can be measured more proximally. It also focuses on ‘outcome’ domains that consider the community health impact of research, which are especially relevant to family medicine. These domains include: (1) building skills and confidence, (2) ensuring that research is ‘close to practice’, (3) supporting linkages and collaborations, (4) developing appropriate dissemination, (5) building sustainability and continuity and (6) investing in infrastructure. We, therefore, utilise Cooke’s framework to evaluate the family medicine training programme’s efforts to build research capacity through its research curriculum and a novel peer mentorship programme. This article describes the research curriculum, the peer mentorship programme and its evaluation.
We conducted a longitudinal mixed-methods evaluation of the research training component of the Lesotho-Boston Health Alliance (LeBoHA) Family Medicine Specialty Training Programme (FMSTP) curriculum. The curriculum uses peer research mentors from the United States (US) to support FMSTP postgraduate trainees remotely as they complete a required research project. The primary goal of the curriculum and mentorship programme is to build research capacity amongst the FMSTP trainees. The specific objectives of this evaluation were to: (1) understand the impact of these efforts on trainees’ research capacity, (2) evaluate the use of peer mentorship to support FMSTP research training and (3) generate insights to improve the quality of research training and mentorship in Lesotho and elsewhere.

Setting

Lesotho is a small landlocked country within South Africa, which faces the highest rates of human immunodeficiency virus and tuberculosis in the world. Approximately half of Lesotho’s two million people, called Basotho, live on less than 1.90 USD per day. Lesotho does not have a medical school, and the FMSTP is the first and only accredited postgraduate medical education programme in Lesotho. The FMSTP is an academic partnership between the Lesotho Ministry of Health and LeBoHA. Over the four-year programme, trainees are educated in clinical family medicine, public health and district health management, including a required scholarly research project on a primary care topic relevant to their community.

Authors’ relationship to topic

Grounding our methods in reflexivity, the following is a brief explanation of the authors’ roles in both the implementation and evaluation of the FMSTP research curriculum and mentorship programme. The first author, C.M., is a family physician and currently the research director of the FMSTP. She developed and implemented the peer mentorship structure, including recruitment and matching of peers, and serves as a peer mentor herself. The fourth author, B.J., is a family physician and the director of LeBoHA. He and C.M. are responsible for teaching the majority of the FMSTP research curriculum. B.J. also provides senior faculty-level research mentorship to all FMSTP trainees. The second author, K.R., joined the evaluation as part of her Master of Public Health training. K.R. became a peer mentor to one of the current FMSTP trainees in April 2019. The third author, S.M., is the first family medicine graduate of the FMSTP and is now its director. He provides faculty research supervision to the trainees. The last author, C.B., is a global mental health researcher who has visited the FMSTP programme, but does not play a direct role in the FMSTP curriculum or mentorship. Co-authors S.M. and B.J. participated as faculty members in the semi-structured interviews that were included in this analysis. None of the other co-authors participated as subjects in the evaluation.

Research curriculum

The overall FMSTP curriculum is taught via monthly in-person week-long ‘contact sessions’, coupled with intermittent remote training and supervision visits to trainees in the district hospitals where they are employed. In recent years, FMSTP training increasingly involves blended learning strategies that use online resources and instruction to complement face-to-face instruction. The curriculum includes separate components on community-oriented primary care (COPC), quality improvement and research.
The four second-year and four third-year trainees started the research curriculum together during the March 2017 contact session. This session introduced basic research theory and design fundamentals. Participants learnt to create their own problem statement and research question, select a methodology and search online databases. Following this, each trainee had in-person supervision visits focused on literature review, stakeholder engagement and advancing their individual proposals. The next contact session included a dedicated session on research ethics and rolled out the research mentorship structure (see below). All remaining sessions were taught remotely using one-hour virtual webinars timed to occur before specific next steps in the research process. For example, a remote Institutional Review Board (IRB) protocol development webinar occurred before the trainees started writing their research proposals. A research data management webinar was held before the majority of trainees started their data collection. Additional research training occurred via feedback and discussion between trainees and mentors during the process of creating and implementing their research protocols. See for an overview of the curriculum timeline.

Research mentorship

To support the research training curriculum, a peer research mentorship programme, which paired US and FMSTP postgraduate trainees, was developed and implemented in April 2017. Peers were recruited via an email to the Boston University Family Medicine residency listserv and via personal invitation. Although there was no formal requirement for a level of research expertise to become a peer mentor, all had previous experience in conducting research. The mentorship interactions occurred primarily via email and social media platforms, such as WhatsApp. The US peer mentor assisted the Lesotho trainee throughout the research process. This included brainstorming research ideas, identifying relevant literature, editing drafts of proposals, presentations, posters and manuscripts and generally learning research skills together. Peers were expected to communicate regularly with trainees, at least once every six weeks. Trainees and peers had the support of senior faculty research mentorship, provided by B.J., who was available for questions and feedback. All trainees were additionally assigned a Lesotho-based faculty research supervisor. Lesotho faculty have limited prior research experience, and thus the role of these supervisors was primarily to assist with on-the-ground troubleshooting and to support trainees’ progress within the context of the FMSTP training programme.

Evaluation participant recruitment

This evaluation firstly included all eight FMSTP trainees who were engaged in the research training curriculum and all Lesotho-based FMSTP faculty. As the evaluation continued, all FMSTP administrators, Boston-based FMSTP faculty and all US peer mentors, apart from author C.M., were also invited to participate. This resulted in a convenience sample of participants from each of these categories who were available and willing to offer feedback at each evaluation time point over a two-year period.

Tool development

The evaluations were conducted using three main tools. Firstly, the FMSTP programme already uses a trainee self-evaluation tool on which the trainees rank their confidence in various curricular domains on a simple five-point Likert scale, with 1 representing ‘not confident’ and 5 representing ‘very confident’.
This was adapted to assess confidence in the seven key research skills the curriculum was designed to teach: (1) choose a research question, (2) conduct a literature review, (3) design a study, (4) collect data, (5) analyse data, (6) write results and (7) present research. The trainee self-evaluation tool results were not anonymous, as they were available to faculty and administrators of the programme. Secondly, a general FMSTP programme feedback form was adapted into a simple short-answer survey, asking the respondent to comment on: (1) their overall impression of the peer mentorship programme, (2) programme strengths, (3) current or anticipated challenges and (4) ideas for improvement. This programme feedback tool was anonymous. Both tools were piloted with a trainee, who provided feedback on question clarity and understandability prior to administration (see and ). Thirdly, evaluators developed a question guide to facilitate semi-structured interviews. These guides were designed to generate information that would expand upon and triangulate with the survey data. The guides specifically sought to (1) understand the impact of the FMSTP research training and mentorship approach on trainee research capacity, (2) explore the experience of peer research mentorship for the trainees, peers, faculty and programme administrators and (3) identify future directions for improving the quality of research training and mentorship (see ).

Data collection

This two-year evaluation invited participation from 20 individuals in total, including trainees, faculty, administrators and peers, with variable participation at each time point. Trainee self-evaluation surveys were collected from all trainees at three time points: April 2017 (pre-programme, T1), August 2017 (early midpoint, T2) and January 2019 (late midpoint, T3). Post-programme (T4) survey data were collected in July 2019 from the graduates only. Surveys were administered in paper format while C.M. was in-country during the T1 and T2 evaluations. Subsequent surveys were administered via Qualtrics online survey software (Qualtrics, Provo, UT). K.R. conducted the majority of interviews. These were conducted both in-person and via web-conferencing software (Zoom) during January–March 2019. See for evaluation time points. Those who could not participate in interviews were given the option to email responses to the interview questions. Demographics were collected via programme logs, verbally at the time of interview or via email.

Quantitative data management and analysis

Paper surveys were entered into Microsoft Excel and combined with exported Qualtrics survey data. Descriptive statistics of participant demographic data, such as means, medians, standard deviations and proportions, were calculated using RStudio (Version 1.2.1335). Programme faculty and administrator demographics were analysed and are reported in aggregate to help protect participant anonymity. Likert scores across all seven measured domains were averaged to create a single ‘survey-scale’ research confidence score for each trainee. Changes in research confidence scores were analysed and compared using medians and ranges. Given that T4 data were available only for the graduates, these four individuals were also analysed independently. The non-parametric Friedman test was used to assess the significance of changes in Likert-scale confidence scores in all seven research domains for all trainees combined and for graduates alone, across their respective time points. The sign test was then used to assess whether the change was positive or negative when comparing T1 to T3 for all trainees and comparing T1 to T4 for graduates.
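To illustrate the composite scoring and the two non-parametric tests described above, the following is a minimal Python sketch (the study's analysis was actually run in RStudio); the Likert ratings below are invented for demonstration only.

```python
import numpy as np
from scipy.stats import friedmanchisquare, binomtest

# Rows = trainees, columns = the seven research skill domains.
# One matrix per time point; entries are 1-5 Likert ratings (invented here).
t1 = np.array([[2,1,2,3,1,2,2], [3,2,2,2,1,1,2], [2,2,3,2,2,2,3], [1,2,2,3,2,2,2]])
t2 = np.array([[3,3,3,3,2,3,3], [4,3,3,3,2,2,3], [3,3,4,3,3,3,4], [2,3,3,4,3,3,3]])
t3 = np.array([[4,4,4,4,3,4,4], [5,4,4,4,3,3,4], [4,4,5,4,4,4,5], [3,4,4,5,4,4,4]])

# Composite 'survey-scale' confidence score: mean across the seven domains.
c1, c2, c3 = (m.mean(axis=1) for m in (t1, t2, t3))

# Friedman test: do the repeated confidence measurements differ across time points?
stat, p = friedmanchisquare(c1, c2, c3)
print(f"Friedman chi-square = {stat:.2f}, p = {p:.4f}")

# Sign test comparing T1 to T3: count trainees whose composite score rose,
# and test against a null of equal chances of an increase or a decrease.
diffs = c3 - c1
n_pos = int((diffs > 0).sum())
n_nonzero = int((diffs != 0).sum())
print(f"Sign test p = {binomtest(n_pos, n_nonzero, 0.5).pvalue:.4f}")
```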
Qualitative data management and analysis

All interviews were audio-recorded, transcribed and coded with the support of NVivo (Version 12.6.0). We used a semi-verbatim transcription style that allowed for the removal of false starts and interviewer prompts to improve clarity, but preserved all content of the text. All written survey data were entered into a table and also edited for spelling and punctuation. Interview transcripts and written data were analysed through thematic analysis using a mixed inductive–deductive approach. The analysis was both data-driven and theory-driven, using questions from the interview guide as well as Cooke’s framework to develop an initial a priori codebook. This framework was selected prior to the development of the codebook as a means of ensuring that our analysis included a review of each core domain of research capacity building.

Both K.R. and C.M. initially familiarised themselves with the data through a quality assurance phase, in which all transcripts were checked against the original audio. K.R. and C.M. conducted open coding of two interviews of different participant types, and the resultant concepts were combined with the a priori codes to create an initial codebook. This codebook was then used by both researchers to code four additional interviews, with review and collaborative modification of the codebook after each. The codebook was shared with S.M. in Lesotho for feedback. A fifth interview and a randomly selected set of written interview responses were coded with this pre-final codebook, and given that no additional concepts were identified in the data, the codebook was finalised. After finalising the codebook, one interview was independently coded and inter-coder agreement was calculated to be 97.3%. This final codebook was then used by C.M. and K.R. to independently code all subsequent interviews and written responses, and to recode the initial interviews that were used to develop the codebook.

After all data were coded in NVivo, both K.R. and C.M. reviewed coded segments independently to identify themes via relationships between codes and to Cooke’s framework domains. These themes were then reviewed collaboratively, together and in a meeting with S.M., to both finalise the analysis approach and begin to define the core themes to explore. K.R. and C.M. then collaboratively mapped themes onto Cooke’s framework and iteratively defined, and then named, those that fell outside of the framework but were prominent in the collected data. Memoing was used extensively throughout the transcription, quality assurance, familiarisation, coding and analysis process to encourage reflexivity, especially regarding the ways in which the evaluators’ role in the delivery of the FMSTP curriculum and mentorship may have influenced its evaluation. In addition, we made use of member checking of our preliminary analysis to provide all participants the opportunity to comment on the appropriateness of data interpretation and representation of the programme. A table containing preliminary findings and exemplary quotes was shared with all invited participants and a three-week period was allowed for comments. During this period, 11 participants responded with feedback and all approved the results.
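As a small illustration of the inter-coder agreement figure reported above, simple percent agreement can be computed as matching coding decisions over total decisions (NVivo reports this natively); the decisions below are invented toy data:

```python
# Each element records whether a given code was applied to a given text segment,
# one list per coder, aligned segment by segment (invented toy data).
coder_a = [True, True, False, True, False, True, True, False]
coder_b = [True, True, False, True, True,  True, True, False]

agreements = sum(a == b for a, b in zip(coder_a, coder_b))
print(f"Inter-coder agreement: {100 * agreements / len(coder_a):.1f}%")  # 87.5%
```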
Ethical consideration

Ethical approval was obtained from the Boston University Medical Campus IRB and the Lesotho Ministry of Health Research Ethics Committee. All written surveys included a research opt-out clause, and verbal informed consent was obtained from each interview participant. Verbal consent was chosen to allow for ease of conducting both in-person and web-conference interviews and because of the low-risk nature of the study. Ethical clearance numbers: Lesotho Ministry of Health Research Ethics Committee: ID89-2017; BU IRB: H-36847.
Characteristics of our sample

The demographics of all 20 invited participants are summarised in . Invited trainees had a 100% response rate at all evaluation time points. Six of the eight trainees (75%) participated in semi-structured interviews. Across both class years, half of the trainees were female and their average age was 36.4 years. All trainees were Basotho. Faculty participation varied over the two-year period because of the retirement and hiring of new faculty. At least two Lesotho-based faculty members completed an anonymous written programme evaluation at each time point. Four faculty participated in semi-structured interviews and one provided written interview answers via email. Three administrators were invited to participate in the programme evaluations at the T3 and T4 time points, and one administrator completed an interview. Programme faculty and administrators were mostly male and included Basotho, American and German nationalities. All three peers invited to participate in the evaluation were female and from the United States. Of these, only one peer provided an emailed interview response and two provided survey responses.

Quantitative results

When evaluating trends in overall research confidence scores amongst all trainees (n = 8), we saw an increase in the median score from 1.7 (range 1.0–2.9) at T1 to 2.6 (range 2.1–3.1) at T2. There was no further increase at T3, however, with the median score remaining at 2.6 points on the five-point scale. A Friedman test of differences amongst repeated measures, conducted on the raw Likert scores for all trainees across these three time points, was statistically significant (χ² = 37.33, p < 0.001). A two-sided sign test comparing trainees’ T1 with T3 raw Likert scores showed a positive median increase of one point. shows the change in graduates’ (n = 4) research confidence scores over all four time points. As an aggregate, we saw a 1.8-point rise in median research confidence scores between T1 (1.3, range 1.0–2.9) and T4 (3.1, range 2.4–4.3), although gains varied by individual graduate. A Friedman test on the raw Likert scores of the graduates across all four time points was also statistically significant (χ² = 34.48, p < 0.001). A two-sided sign test comparing graduates’ T1 with T4 raw Likert scores showed a positive median increase of two points.

Qualitative results

Ten semi-structured interviews, two written interview answers and all 36 open-ended survey responses were analysed. Results were organised by the six domains of Cooke’s framework. Where appropriate, a distinction is made between the individual, organisational and supra-organisational levels at which these domains operate. We also share additional findings specific to peer mentorship that emerged from the data. Representative quotes are identified using the numerical code of the participant: trainees who began the research curriculum of the FMSTP as second-years have codes starting with two, whilst trainees who began as third-years have codes starting with three. See for a summary of results.

Skills and confidence building

Trainees built research skills and confidence primarily through the practical experience of conducting a mentored research project.
This intentional ‘learning-by-doing’ approach placed research mentorship in a central role in supporting trainees’ research skill development: ‘Unless you’re using that information (…) it doesn’t mean anything, you have to apply it to a project and have a mentor working with you on the project for that to make sense.’ (Faculty 191, Male) Although research skills increased, confidence in using those skills, for example to go on and be a peer mentor themselves, was limited for most trainees. Just one trainee expressed that he or she would feel confident being a peer mentor, but others were hesitant, not feeling ready yet: ‘From the experience that I have now, I think I can be able to help people and show them how to find information.’ (Trainee 342, Female) ‘At the moment I can’t be a mentor because I still need to be mentored.’ (Trainee 343, Female) In addition, a highly applied research training curriculum can result in trainees developing skills that relate to their own project while lacking broader familiarity with other research types. This left many trainees feeling limited in their self-efficacy regarding overall research abilities: ‘I can’t appreciate lots of things with research. I just know what I’m doing.’ (Trainee 343, Female)

Close to practice

Trainees consistently made statements demonstrating their understanding that research has the potential to greatly impact local practice and that community-based research is an important vehicle for physicians to learn about their community: ‘Up until you have done the research, met the community, got the information from the community, it’s up until then that you get the reality of what’s going on (…) this has just opened my eyes.’ (Trainee 341, Male) To ensure that research remains ‘close to practice’, research topic selection and questions must be defined locally. Trainee research should flow from the COPC portion of the FMSTP curriculum and must be ‘operational’, designed to solve a problem that exists in the community or institution: ‘The problems and the project come from them, not any external suggestion or anything from well-meaning peer mentors.’ (Faculty 112, Female) ‘Focusing on operational research that flows from the COPC is going to make it more useful (…) [ it will ] reinforce COPC which is one of the core aspects of family medicine in this region.’ (Administrator 142, Female)

Linkages and collaboration

The peer mentorship itself was an important collaborative experience that was built into the FMSTP approach to research training. Having peers come from abroad provided an additional ‘excitement’ factor: ‘It’s so exciting (…) to have those kind of relationships (…) I have someone that I can talk to from abroad.’ (Administrator 142, Female) The programme also included an opportunity for trainees to present their research at the World Organization of Family Doctors (WONCA) Africa Regional Conference. This provided an opportunity for regional networking and idea exchange: ‘It was fulfilling to exchange ideas with people from different cultural backgrounds.’ (Trainee 231, Male) The findings highlight the importance of research mentorship in fostering linkages that can be sustained throughout one’s career: ‘There’s an ongoing relationship after that, ideally. And not just with the mentor, but with other people who are doing similar things.
You have a cohort of people who are then your peers in that research area (…) they have a long, lifetime often, relationship around the work they are doing.’ (Faculty 191, Male) A number of the trainees’ research projects engaged nurses to support data collection, for example to co-facilitate a focus group on barriers to cervical cancer screening. There was, however, limited other discussion in the interviews regarding local stakeholder engagement, inter-professional research collaborations or linkages with policy makers.

Appropriate dissemination

Trainees were asked on the written surveys about their goals for their research projects, and many expressed a desire to disseminate their results both within Lesotho and internationally: ‘Want it to be published nationally and globally.’ (Trainee 233, Female) ‘To present it at [ a ] symposium, nationally and internationally.’ (Trainee 231, Male) Beyond these traditional means of dissemination via presentations and publications, trainees also expressed goals that acknowledge research’s potential to impact local health practices: ‘Improve cervical cancer screening in my district because it really is preventable.’ (Trainee 341, Male) Lacking, however, were more specific plans for strategic local dissemination that would lead to changes in their clinical environments or health policy.

Continuity and sustainability

Trainees expressed an appreciation for the continuity of the mentorship relationship with their peers throughout the research training experience: ‘Very helpful and [ they ] guide us through every step.’ (Trainee, anonymous evaluation) Participants spoke about the importance of research for family medicine, seeing it as a way to improve practices and overall systems and to reveal what is going on in their communities. This, in addition to a number of trainees explicitly stating their desire to continue engaging in research, indicates that a positive research culture was fostered through the programme: ‘There’s so much more to know about my community.’ (Trainee 341, Male) ‘So family medicine is creating this culture of research in us, [ a ] culture of living and doing research.’ (Trainee 232, Male) Apart from the idea of involving the graduates as local near-peer research mentors, no other specific plans, funding or structure for continued involvement in primary care research beyond the trainees’ FMSTP research projects were discussed.

Infrastructure

A number of infrastructure-related challenges were noted at all levels of Cooke’s framework. Firstly, trainees did not have protected time to dedicate to their research projects and struggled with time and overall project management: ‘We are supposed to do the work on our own time, so I think that one was a bit difficult because sometimes we are overrun by the other jobs, so we ended up giving minimum time to research.’ (Trainee 342, Female) Also at the individual level, a lack of funding to support the costs of conducting the research was noted: ‘They talk about issues of resources, even if it seems small, as being difficult. Like making copies, having toner, having pens, having adequate space and time to complete everything.’ (Faculty 132, Male) At the organisational level, there is a need for research capacity building among the Lesotho-based FMSTP faculty.
This will improve their provision of academic supervision for the trainees and remove their dependence on Boston for this support: ‘Let us also make sure that our faculty members get the capacity they need, so that when there is no such programme, they can be able to play that part.’ (Administrator 142, Female) Additionally, there was a lack of comprehensive coordination of the research curriculum and mentorship programme. Many felt pressure in relation to the programme timeline and expressed that better organisation and coordination would have helped them feel more prepared: ‘I wish that the objectives could be very clear (…) so that everybody knows from the beginning what they are expected to do. And then we don’t get surprises in the middle of the programme.’ (Trainee 341, Male) The research mentorship occurred largely virtually, which was described as a flexible and effective way to have questions answered promptly: ‘Even when she left Lesotho to the States, she kept communication going on because she opened up a WhatsApp group (…) So, in a way, she’s always in Lesotho. Every time I say anything, [ she ] will just pop up and answer me, at any point, at any time.’ (Trainee 341, Male) In contrast, some trainees also expressed that virtual mentorship has limitations and that, at times, in-person support would be preferred. National infrastructure challenges, including poor network coverage and the cost of data in Lesotho, were also discussed: ‘We have just recently begun to have phone contact for each other, as it was sometimes a bit difficult with emails alone.’ (Trainee, anonymous evaluation) ‘Communication can be difficult over internet, data in Lesotho is expensive, and the network is bad many times.’ (Faculty, anonymous evaluation) In Lesotho, there is one national IRB through which all human subjects research protocols are evaluated. The approval process was lengthy for all trainees and resulted in delays in starting their projects. Additionally, there are only limited locally produced data to inform research studies: ‘The same studies might have been done abroad, or somewhere before, but the local data is not there.’ (Trainee 232, Male)

Peer mentorship

In addition to the themes found in relation to Cooke’s framework, another major theme relates to the use of peer mentorship as a unique approach to supporting research capacity building. Faculty, administrators and trainees alike noted the unique benefits of having a peer as part of the research mentorship process. Because there is less hierarchy in a peer relationship, trainees feel more open and comfortable asking questions. This was contrasted with supervisors, with whom trainees often noted feeling less comfortable. Peers use informal communication methods that make them easily accessible, and they were often referred to as a ‘friend’: ‘It’s easier to talk with [ a ] peer mentor than the supervisors. And they are informally available, on WhatsApp or Facebook.’ (Trainee 343, Female) ‘Lowers barriers to ask for assistance and it promotes motivation.’ (Faculty 122, Male) ‘My understanding is that a peer is someone whom we can be friends with (…) and is someone that I can freely learn from.’ (Administrator 142, Female) As peers have been through the training process recently and are themselves managing similar challenges, they are relatable. These peer relationships were described as motivating and good at providing ‘moral support’.
A limitation of peer mentorship is that peers may not have the time to be as responsive as desired, because they are themselves in training or are busy early-career physicians. Additionally, there were concerns about the role of the peer and whether they have sufficient expertise to provide valuable research mentorship: ‘She tries by all means to be quick at response, but certain times it might not be as quick as expected.’ (Trainee 232, Male) ‘Some US residents are not equipped or available enough to be good peer mentors.’ (Faculty, anonymous evaluation) ‘I wish I had more research knowledge to bring to the table.’ (Peer mentor 172, Female) Throughout the interviews, respondents enumerated specific characteristics of good peer mentors. These included being: (1) good communicators, including being adept listeners and prompt with responses, (2) dedicated and committed, (3) friendly and non-judgemental, (4) good at time management, and specifically able to model and teach time management, (5) experienced in research and (6) familiar with the local context of the mentee.

Triangulation of quantitative and qualitative results

Moderate increases in research confidence among trainees were seen both in the survey results and in the interviews. After completion of the programme, the four graduates’ median research confidence score of 3.1 represented a significant improvement from their pre-programme score; however, it still reached only moderate levels of confidence on the five-point Likert scale. The interviews allow us to unpack these scores. Trainees expressed that while they do feel ‘a little’ confident, most do not yet feel sufficiently confident to do a research project independently, nor to mentor another without additional support: ‘I feel a little confident. We did the protocol (…) and I feel okay, it’s doable, I can do it, maybe.’ (Trainee 343, Female) ‘Continue to get better at it until I am able to mentor someone else.’ (Trainee, anonymous evaluation)
The evaluation demonstrated that the FMSTP research curriculum and peer mentorship programme were successful in positively impacting a number of Cooke’s research capacity building domains. Firstly, research skills and confidence increased moderately among FMSTP trainees over the two-year evaluation period. Graduates’ research confidence scores increased as a group, although gains varied by individual: graduate 342 started with low confidence that increased substantially, while graduate 341 started with relatively high confidence and made minimal gains. All graduates completed their research projects within their four-year training programme and published research articles in the Lesotho Medical Association Journal. One graduate presented her final research results at the South African Academy of Family Physicians 22nd National Family Practitioners Congress in August 2019. The programme was also successful in creating research experiences that were grounded in the trainees’ clinical practice and in enhancing the overall ‘culture of research’ within the FMSTP, which is promising for sustainability. As was noted by Cooke, these latter two successes feed one another: the more the trainees engage in ‘useful’, locally relevant research that is ‘close to practice’, the more they value research engagement. According to Mash et al., the development of a research culture is ‘essential’ to advancing African primary care research capacity building. Other key findings of the evaluation included the identification of specific benefits of having peers as research mentors. Trainees feel comfortable with peers, who are seen as friends and are readily available via informal communication methods. Virtual delivery of peer mentorship is possible and has the benefit of being highly flexible, although it is limited by Internet connectivity and cost. Teaching research via an applied, learning-by-doing approach is valuable; however, it must be balanced with formal instruction on research theory and opportunities to learn about other trainees’ research projects to ensure exposure to a variety of research methodologies. The evaluation identified that more work is needed within the domains of linkages and collaboration, specifically with regard to community engagement, appropriate dissemination, and continuity and sustainability. Another key finding was the identification of a number of research infrastructure-related gaps within the FMSTP. These included insufficient protected time, a lack of funding for research-related costs and the need for clearer organisation of the research curriculum and mentorship structure. Many of our findings regarding peer research mentorship are similar to those found by Rukundo et al. in their study of near-peer mentorship of medical undergraduates by master’s students. Peers are able to increase the workforce within institutions that lack sufficient mentorship capacity and can bridge gaps between senior lecturers and learners. In both studies, participants mentioned feeling more ‘free’ to engage with and ask questions of peers than of senior lecturers. Cole et al. focus on the need to create ‘safe spaces’ for mentorship to be able to thrive. They also highlight how focusing on co-learning can promote the development of mentorship ‘across hierarchies’. Our study supports the concept that the creation of a safe, non-hierarchical learning environment is a key benefit of peer mentorship. Lescano et al.
highlight a nuance to our study’s finding regarding peer mentorship’s ability to circumvent the hierarchy typically found within medical training. They argue that mentorship culture in high-income countries (HICs) tends to be more horizontal, whereas in LMICs it tends to be more strictly hierarchical. Thus, although it is tempting to attribute our findings exclusively to the fact that the mentor is a peer, it may also be because our peer mentors came from the United States, an HIC. These peers may have brought their own cultural norms and approach towards mentorship that could, in turn, have influenced the lack of hierarchy just as much as the peer aspect did. Characteristics of a ‘good’ peer mentor identified in our study, such as being committed, available, experienced in research and a good listener, mirror those found in recently published LMIC research mentorship competencies. The described FMSTP research training curriculum and mentorship structure lacked sufficient organisation, causing the trainees to feel uncertain about expectations. Literature evaluating family medicine resident scholarship has specifically identified uncertainty as a key barrier to research engagement. The infrastructure gaps we identified, including lack of protected time and research funding, are common within family medicine residency training programmes worldwide. A study of family physicians in Kenya identified these same barriers as important limitations to continuing research engagement beyond residency training, despite ongoing interest and recognition of research as important to family medicine practice. Using the domains of Cooke’s framework to organise our analysis had a number of benefits; however, there were also some limitations to its use. Some gaps noted in specific domains are likely partially attributable to outside factors, such as the timing of the interviews and the content of the interview guide. The interviews took place midway through the research curriculum. This timing may be the primary reason the interviews failed to capture specific plans for local dissemination of trainees’ research results. Similarly, given this timing, we have only limited information about the continuity and sustainability of the programme. The interview guides were not explicitly developed to capture all of Cooke’s framework domains, and thus gaps in areas such as linkages and collaboration may reflect the fact that this information was not specifically asked about. Other limitations include that this was an evaluation of a single training programme, and thus our case study may not be generalisable to family medicine programmes in other contexts. Our small sample size and use of averages to report Likert-scale data limit interpretation of changes in research confidence. As the authors were involved in both the implementation and the evaluation of the programme, the study may be subject to researcher bias. Response bias may also limit the findings, which were heavily drawn from the semi-structured interviews. Two of the eight registrars could not be interviewed, and only one peer mentor provided written responses to the interview questions; no full interviews were conducted with peers. A final limitation was that the evaluation did not systematically capture the level of engagement of each trainee with the research curriculum or with their peer mentorship.
This limits the interpretation of findings because we cannot accurately report on the dose of the intervention received by each trainee. These limitations are mitigated, however, by the use of triangulation during this longitudinal mixed-methods evaluation. We elicited perspectives from trainees, faculty, administrators and peers, both via anonymous written feedback and via in-depth interviews. Other methodological strengths included rigorous quality assurance of all transcriptions, an iterative process of developing the codebook with input from three authors, consistent use of memoing focused on reflexivity and the use of member checking to ensure the validity of our findings. This evaluation was successful in its objective of identifying ways to improve the quality of the FMSTP research training and mentorship approach. In November 2019, C.M. presented the findings of this study to FMSTP faculty. This resulted in several changes, such as a more clearly structured research curriculum, including explicit learning outcomes that were added to the FMSTP training portfolio. The overall curriculum was reorganised to ensure that trainees complete a COPC needs assessment before initiating their research projects. Research-in-progress meetings were added to quarterly contact sessions, and trainees will now have approximately eight afternoons of protected time for research during specific clinical rotations each year. A revised structure for research mentorship and supervision was implemented, including the use of local near-peer research mentors: the four recently graduated family medicine specialists. Future studies are needed to assess the value of the changes made within the FMSTP based on this evaluation. In addition, although our study supports the use of peer and distance mentorship, further work is needed to understand how these strategies may be useful in other contexts.
Equipping family physicians with the capacity to ask and answer important research questions in their communities holds the promise of improving primary care health delivery, especially in a resource-limited context such as Lesotho. This evaluation of the FMSTP research curriculum and peer mentorship programme directly resulted in a number of specific improvements that are being implemented, including better organisation of the research training curriculum, the addition of protected time for research and the use of recent graduates as near-peer research mentors. Our example of using Cooke’s framework to evaluate our programme may guide further research in this crucial area of research capacity building for family physicians in LMICs.
Mesothelioma cell heterogeneity identified by single cell RNA sequencing | d5229fc1-15e5-4733-8a7b-6299b79526f7 | 11906801 | Cytology[mh] | Pleural mesothelioma (PM) is a notorious cancer characterized by an escalating incidence and formidable clinical management challenges, often culminating in a grim prognosis . Recognized as heterogeneous both histologically and molecularly, MESO manifests diverse cell populations within tumors, a phenomenon termed tumor cell heterogeneity. Histological diversity in PM encompasses three primary types—epithelioid, sarcomatoid and biphasic morphologies, each demonstrating substantial associations with clinical outcomes . Mesothelioma cell heterogeneity embodies a multifaceted interplay of varied cellular subpopulations within tumors, characterized by disparities in genetic, morphological, and functional attributes , . This inherent diversity poses formidable hurdles in devising effective diagnostic modalities and targeted therapeutic approaches for MESO. A comprehensive analysis of the nuances of mesothelioma cell heterogeneity holds significant potential for advancing personalized treatment paradigms and ultimately enhancing patient outcomes. The tumor microenvironment plays a pivotal role in driving tumor progression, invasion, and metastasis across various cancers, including PM .The intricate interplay between tumor cells and their surrounding microenvironment has been recognized for decades. Particularly, the epithelial–mesenchymal transition (EMT) process stands out as a key contributor to the dismal prognosis associated with mesothelioma. Through our previous investigations, we have identified a panel of EMT genes that appears to be unique to mesothelioma tumors . Notably, the up-regulation of this specific EMT gene signature correlates strongly with diminished survival rates among PM patients . These findings underscore the critical importance of elucidating the underlying molecular mechanisms to inform potential therapeutic strategies and prognostic assessments in PM. There are a lot of studies that have been dedicated to exploring cancer cell heterogeneity including mesothelioma – . Notably, a study identified 12 expression programs exhibiting heterogeneity across various cancer cell lines. These programs were associated with diverse biological processes, including the cell cycle, senescence, stress and interferon responses, EMT, and protein metabolism. This highlights the intricate molecular landscape underlying cancer cell heterogeneity , . A recent study provides the first comprehensive single-cell transcriptomic atlas of the human parietal pleura, offering unprecedented resolution of its cellular composition, which identifies novel pleural-specific fibroblast subtypes, characterizes in vitro models of mesothelial cells, and compares them with in vivo data, enhancing understanding of pleural biology , . Another study used scRNA-seq, paired with other genomic and histologic analyses, to explore the EMT of PM malignant cells and their tumor microenvironment. It identified distinct malignant cell programs for epithelioid and sarcomatoid histologies, a new uncommitted EM phenotype in biphasic tumors, and signaling pathways as potential drivers of PM cell fate. These findings offer valuable insights into PM biology and highlight non-malignant cell signals as contributors to EMT and tumor progression . 
This comprehensive analysis sheds light on the diverse molecular profiles within MESO cell populations, emphasizing the importance of considering heterogeneity in understanding cancer biology and developing targeted therapies , . MESO has limited treatment options, and its subtypes can influence treatment responses. The processes of EMT and the composition of immune cell populations are closely associated with the effectiveness of immunotherapy . To date, there is a scarcity of specific studies investigating circulating mesothelioma cell heterogeneity at the transcriptomic level . In a ground-breaking study by Mangiante et al., a thorough examination of whole-genome sequencing data, coupled with transcriptomic and epigenomic data, was conducted using multiomics factor analysis. The results revealed four distinct dimensions that were found to be complementary. These dimensions effectively captured significant interpatient molecular disparities by emphasizing extreme phenotypes indicative of interdependent tumor specialization. Importantly, these findings shed light on the intricate interplay between the functional biology of PM and its genomic background, thereby offering valuable insights into the diverse clinical manifestations observed among PM patients . Tumor cell heterogeneity presents significant challenges in the realm of cancer treatment, emphasizing the critical need to comprehend and characterize this complexity to advance the development of efficacious cancer therapeutics. Leveraging technological breakthroughs, such as scRNA-seq, has emerged as a pivotal tool in deciphering the intricacies and dynamics of tumor heterogeneity. This technology enables the identification of potential therapeutic targets and the formulation of personalized treatment strategies. In the context of this study, our objective is to delineate mesothelioma cell heterogeneity using the scRNA-seq technique under both in vitro and in vivo conditions. By employing this advanced sequencing method, we aim to shed light on the diverse cell populations within PM tumors, providing a comprehensive understanding of their molecular landscape, including in circulating tumor cells. These new findings are anticipated to contribute to the development of novel strategies for prognosis and therapeutics not only in PM but also in other types of cancer. Ultimately, the insights gained from this research endeavour have the potential to pave the way for targeted and more effective treatments, improving patient outcomes and advancing the field of cancer therapeutics. Through a deeper understanding of mesothelioma cell heterogeneity, we aim to make significant strides in the ongoing efforts to combat this challenging and notorious cancer.
Study design and data integration

Three distinct groups of mesothelioma RN5 cells were prepared from separate experimental conditions: cultured cells (CC) derived from in vitro cell line culture, circulating tumor cells (CTC), and peritoneal lavage tumor cells (Lav), the latter two obtained in vivo from peripheral blood or peritoneal lavage, respectively, from tumor-bearing mice at 4 weeks post RN5 cell intraperitoneal (ip) injection (Fig. A). Enriched CTC (MSLN+CD45−) from peripheral blood were prepared using a MACS column and a microfluidic chip (Fig. S). Subsequently, merged data were analyzed to elucidate the transcriptomic characteristics of tumor cell clusters. Among the total of 32,371 cells analyzed, 8208 were identified as tumor cells based on the expression of the mesothelioma markers Msln , Wt1 , and Sparc (Fig. B). Expression levels of tumor cell markers varied across different clusters. Six clusters of merged tumor cells were characterized by heatmaps and volcano plots illustrating the top up- and down-regulated genes (Figs. S & S). Additionally, all genes exhibiting significant changes within each cluster were documented (Table 1S). The subsequent analysis focused on discerning differences among the three groups, as well as elucidating distinct features following reclustering of each group.

Tumor cell identification and reclustering

The total number of tumor cells (8208) annotated from merged RN5 cells underwent analysis through t-distributed stochastic neighbor embedding (tSNE) clustering. This analysis was performed on cultured cells (CC), lavage cells derived from a mouse ip model (Lav), and circulating tumor cells within peripheral blood mononuclear cells (PBMC) of a mouse ip model (CTC). These cells expressed the genes Msln , Wt1 , and Sparc . Each group, divided by sample ID, was then reclustered into subpopulations: CC into 6 clusters, CTC into 4 clusters, and Lav into 5 clusters (Fig. A). The top 50 up-regulated genes within each group of RN5 cells from CC, CTC, and Lav were identified (Fig. B). Additionally, the top 10 genes exhibiting significant changes (log2 fold change > 1 or < −1, and p < 0.05) from each subcluster in the three groups were visualized in heatmaps (Fig. C). The top 30 up- or down-regulated genes were presented for each cluster of the three groups (Fig. S), while volcano plots displayed all genes with significant changes in each cluster (Fig. S). Interestingly, the most upregulated genes in CTC were related to angiogenesis and platelet activation (Ppbp, Gp9, Clec11b), while the most upregulated genes in Lav were related to complement activation (C1qa, C1qb), suggesting that platelets could play a particularly important role in tumor metastasis in mesothelioma. Furthermore, all up-regulated genes meeting the criteria (log2 fold change > 1.0 and p < 0.05) were identified in CC (258 genes), CTC (147 genes), and Lav (105 genes) (Table 2S).

Hallmark pathways associated with up-regulated genes

The hallmark pathways prominently enriched among the top 50 up-regulated genes in the CC group include MYC targets v1 and v2, E2F targets, mTORC1 signaling, unfolded protein response, and G2M checkpoint. These pathways are known for their canonical roles in regulating cell cycle progression and proliferation (Fig. A). In contrast, the CTC group exhibits significant enrichment in hallmark pathways such as coagulation, TNFa signaling via NFkB, complement, apoptosis, and epithelial–mesenchymal transition (EMT).
This suggests that the overexpression of genes in CTCs may be associated with the promotion of cancer cell stemness in mesothelioma (Fig. B). Notably, among the top 10 hallmark pathways, EMT emerges as the most significant pathway in the context of the tumor microenvironment. The panel of genes associated with EMT was predominantly related to the extracellular matrix (ECM), including genes that are characteristic of cancer-associated myofibroblasts (myCAF) such as ACTA2 , CD44 , and FN1 . EMT was also associated with the emergence of an IFN-α and IFN-γ response, suggesting the EMT process may be associated with an immunogenic response. Overall, this supports the notion that the tumor microenvironment, and in particular myCAFs, may contribute to the EMT process as a mechanism of escape from the immunogenic response (Fig. C).

Gene ontology (GO) annotations of the up-regulated genes

Gene Ontology (GO) term annotations in the biological process (BP), cellular component (CC), and molecular function (MF) categories were identified using GSEA ( https://www.gsea-msigdb.org/gsea/login.jsp ). The top 10 important GO term annotations for cultured cells (CC) (Fig. A), circulating tumor cells (CTC) (Fig. B), and peritoneal lavage cells (Lav) (Fig. C) were determined. Interestingly, the GO terms in each group are unique, with no overlap among the three groups. Specifically, the CC group does not share any GO terms with either the CTC or Lav group. However, the CTC and Lav groups exhibit some overlaps: 4 overlaps in biological processes (GO BP) (Fig. D), 3 overlaps in cellular components (GO CC) (Fig. E), and 6 overlaps in molecular functions (GO MF) (Fig. F). This highlights the distinct functional annotations associated with each cell type and underscores the heterogeneous nature of mesothelioma cells in different microenvironments.

Up-regulated genes in subclusters of cultured tumor cells show a greater tendency to promote cell proliferation

As anticipated, tumor cells cultured under optimized conditions exhibit a propensity for rapid proliferation. Gene set enrichment analysis (GSEA) revealed significant enrichment of hallmark gene sets associated with cell proliferation, including MYC targets v1 and v2, E2F, MTORC1 signaling, unfolded protein response, and G2M checkpoint, among the up-regulated genes of the CC group (Fig. A). The gene sets MYC targets v1 and v2, E2F, and MTORC1 signaling shared a large number of overlapping genes with the CC up-regulated genes; however, few or no overlapping genes were found with the CTC or Lav up-regulated genes (Fig. S). Velocity analysis did not show a significant change in the pseudotime trend across the cell population (Fig. 7S). To further explore the heterogeneity within the CC group, we conducted reclustering, resulting in the identification of 6 subpopulations (Fig. A). Subsequently, the up-regulated genes within each subpopulation were analyzed for overlaps with cell proliferation gene sets, expressed as percentages. Remarkably, the percentages of overlapping genes in CC clusters 2 and 3 exceeded 40%, notably higher than the average overlaps observed in the clusters of the CTC and Lav groups (Fig. A). This suggests that subpopulations within the CC group, particularly clusters 2 and 3, may exhibit heightened proliferative potential compared to other clusters and groups.
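The overlap percentages and enrichment significance reported in this and the following sections reduce to simple set arithmetic plus a hypergeometric test, the test applied by StemChecker as described in the Methods. The Python sketch below illustrates the calculation; the gene lists and the genome size are hypothetical placeholders, not the study's data.

```python
from scipy.stats import hypergeom

GENOME_SIZE = 21000  # assumed number of annotated mouse genes (placeholder)

def overlap_stats(cluster_genes, reference_set, genome_size=GENOME_SIZE):
    """Percentage overlap of a cluster's up-regulated genes with a
    reference gene set, plus a hypergeometric enrichment p-value."""
    cluster, reference = set(cluster_genes), set(reference_set)
    shared = cluster & reference
    pct = 100.0 * len(shared) / len(cluster)
    # P(X >= |shared|) when drawing |cluster| genes from a genome that
    # contains |reference| "successes": the survival function at k - 1.
    p = hypergeom.sf(len(shared) - 1, genome_size, len(reference), len(cluster))
    return pct, p, sorted(shared)

# Hypothetical example: a CC subcluster versus a proliferation gene set.
cc_cluster2_up = ["Mki67", "Ccnb1", "Cdk1", "Top2a", "Mcm5"]
proliferation = ["Mki67", "Ccnb1", "Cdk1", "Pcna", "Mcm5", "E2f1"]
pct, p, shared = overlap_stats(cc_cluster2_up, proliferation)
print(f"overlap = {pct:.1f}% {shared}, hypergeometric p = {p:.2e}")
```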
Circulating tumor cells appear to possess greater stemness properties, as determined by StemChecker

Compared to the other groups, the up-regulated genes in each cluster of the CC group exhibited the least overlap with stem cell gene sets, with the exception of the spermatogonial stem cell set, which showed approximately 2% overlap in clusters 4 and 5. Conversely, in the CTC group, cluster 3 displayed significantly higher overlaps with the embryonic stem cell, neural stem cell, intestinal stem cell, and hematopoietic stem cell gene sets compared to the other clusters. In the Lav group, clusters 4 and 5 exhibited the highest overlaps (5.15% and 6.12%, respectively) with the spermatogonial stem cell gene set, while showing lower overlaps with other stem cell types (Fig. B). Given that EMT ranks as the top upregulated pathway in the Lav group, we specifically investigated the overlaps of the up-regulated genes in each group with the hallmark EMT gene sets in both human and mouse.

The EMT pathway ranks top in hallmark gene enrichment in the Lav group

The human and mouse hallmark EMT gene sets comprise 200 and 194 genes, respectively. Using the InteractiVenn program, we calculated the overlaps of the up-regulated genes in each group with the human or mouse EMT gene sets. Among the up-regulated genes, 5 genes (1.98%) in CC, 9 genes (6.12%) in CTC, and 14 genes (13.33%) in Lav overlapped with human EMT gene identifiers. Similarly, 5 genes (1.98%) in CC, 9 genes (6.12%) in CTC, and 13 genes (12.38%) in Lav overlapped with mouse EMT gene identifiers. The names of the overlapping genes in each group were also included (Fig. C). Notably, the Lav group exhibited the highest overlap with the hallmark EMT gene set, including genes such as COL3A1 , CD44 , COL6A2 , FN1 , FBLN1 , FBLN2 , LGALS1 , IGFBP2 , WNT5A , LOXL2 , GJA1 , ACTA2 , ELN , and CDH11 . These genes are associated with EMT processes and indicate a potential role for EMT in the Lav group.

The association of GSVA score with cancer-related pathway activity and overall survival in the TCGA MESO cohort

The differences in cancer-related pathway activity between high and low GSVA score groups in mesothelioma were analyzed using the GSVA platform ( https://guolab.wchscu.cn/GSCA/#/ ). The association between GSVA score and the activity of cancer-related pathways revealed distinct patterns among the CC, CTC, and Lav groups. In the CC group, the GSVA score of the up-regulated genes was positively correlated with the cell cycle and apoptosis pathways, while being negatively correlated with the RAS-MAPK pathway. Conversely, in the CTC and Lav groups, the GSVA scores of the up-regulated genes were negatively correlated with the cell cycle and apoptosis pathways, but positively correlated with the RAS-MAPK pathway in the Lav group. Notably, in the Lav group, both the EMT and RAS-MAPK pathways were significantly up-regulated (Fig. A). We also examined the correlation of the GSVA score of the up-regulated genes in each group with tumor-infiltrating lymphocytes in the TCGA MESO cohort and found that the overall immune cell infiltration score was higher for the CC genes compared with the CTC and Lav groups (Fig. B), indicating that the up-regulated gene expression of Lav and CTC may result in a more immunosuppressive microenvironment. Furthermore, survival analysis using the TCGA database revealed interesting findings. The up-regulated genes in the CC group exhibited a highly significant impact on the survival of patients with mesothelioma.
Higher levels of gene expression of this signature were associated with poorer prognosis compared to the group with lower levels (log-rank p value = 0.00018). However, the other two groups, CTC and Lav, did not show a significant impact on patient survival, with log-rank p values of 0.68 and 0.11, respectively (Fig. C). Taken together, the high level of gene expression driving cell cycle and proliferation in the CC group may indicate significant prognostic value in PM.
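Before turning to the Discussion, the analysis workflow of this section, namely selecting tumor cells by marker expression and then reclustering each group, can be sketched in a few lines. The authors performed these steps in Loupe Cell Browser and CReSCENT (see Methods); the scanpy version below is an illustrative alternative, and the file name, the sample_id column, and the clustering resolution are assumptions.

```python
import numpy as np
import scanpy as sc

MARKERS = ["Msln", "Wt1", "Sparc"]  # mesothelioma tumor-cell markers used above

# Hypothetical merged object containing the CC, CTC and Lav samples.
adata = sc.read_h5ad("merged_rn5.h5ad")

# Flag a cell as a tumor cell if any marker is detected (a simplistic rule;
# the study's annotation was cluster-based).
expr = adata[:, MARKERS].X
expr = expr.toarray() if hasattr(expr, "toarray") else np.asarray(expr)
tumor = adata[(expr > 0).any(axis=1)].copy()

# Recluster each sample group separately, as done for CC, CTC and Lav.
for group in ["CC", "CTC", "Lav"]:
    sub = tumor[tumor.obs["sample_id"] == group].copy()  # assumed obs column
    sc.pp.normalize_total(sub, target_sum=1e4)
    sc.pp.log1p(sub)
    sc.pp.pca(sub, n_comps=30)
    sc.pp.neighbors(sub)
    sc.tl.tsne(sub)                    # tSNE embedding of the subpopulations
    sc.tl.leiden(sub, resolution=0.5)  # resolution chosen arbitrarily
    sc.tl.rank_genes_groups(sub, "leiden", method="wilcoxon")
```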
Mesothelioma exhibits considerable heterogeneity in both cellular and molecular biology, contributing to variations in tumor behavior and treatment response. Tumor cell heterogeneity poses challenges in predicting prognosis and designing effective therapeutic approaches. Therefore, a better understanding of the molecular and cellular diversity in mesothelioma is crucial for accurate prognosis. Patients with more heterogeneous tumors may experience different clinical outcomes, treatment responses, and survival rates. Identifying tumor cell heterogeneity through advanced molecular profiling techniques makes it possible to tailor personalized therapy and improve prognosis , . This work is part of a series of studies using the murine mesothelioma cell line RN5, following the determination of an EMT gene signature from the up-regulated genes at all time points in RN5-bearing mice . The RN5 cell line was established in Nf2 heterozygous C57BL/6 mice and has the characteristics of the biphasic subtype. We used this cell line to identify a panel of EMT genes specific to mesothelioma. In this study, we observed remarkable enrichment of key pathways, including MYC targets v1 and v2, E2F targets, mTORC1 signaling, unfolded protein response, and G2M checkpoint, in cultured RN5 cells. These pathways are well known for their canonical roles in controlling cell cycle progression and proliferation. The MYC gene, in particular, has garnered significant attention in cancer research due to its pivotal role in cell cycle regulation and proliferation. MYC is a proto-oncogene that encodes transcription factors involved in regulating the expression of genes critical for cell growth, proliferation, and apoptosis. Its dysregulation has been implicated in various aspects of cancer development and progression . E2F is a family of transcription factors pivotal in orchestrating the expression of genes essential for DNA synthesis and cell proliferation. Central to cell cycle regulation, E2F plays a critical role in facilitating the transition from the G1 phase to the S phase. Among the transcriptional targets regulated by E2F are cyclins, cyclin-dependent kinases (CDKs), checkpoint regulators, and DNA repair and replication proteins. Extensive evidence underscores the crucial involvement of E2F transcription factors in modulating cell proliferation . Tumors characterized by high E2F scores exhibit a significant enrichment in the expression of numerous cell cycle-related hallmark gene sets, including the G2M checkpoint, MYC targets v1 and v2, MTORC1 signaling, and unfolded protein response. The E2F pathway score serves as a reflection of underlying rapid cell proliferation within the tumor microenvironment . Our findings reveal that the up-regulated genes in the CC group prominently engage MYC targets v1 and v2 as well as E2F targets, primarily governing cell cycle regulation and proliferation. Furthermore, the mammalian target of rapamycin complex 1 (MTORC1) emerges as a pivotal regulator of cell growth and proliferation. MTORC1 is a protein complex that integrates diverse signals, including nutrient availability, energy status, and growth factors, to meticulously orchestrate cell cycle progression and cellular metabolism , . In addition, our analysis identified up-regulation of both the unfolded protein response and MTORC1 signaling pathways in the CC group.
The up-regulated genes identified in the CTC group exhibited significant overlap with gene sets associated with various stem cell types, suggesting their potential involvement in maintaining cancer cell stemness. Remarkably, oncogenic KRAS has been implicated in augmenting classical stemness signaling pathways, with KRAS overexpression, alone or in combination with TP53 alterations, playing a pivotal role in MESO development and progression , . KRAS signaling is known to be crucial for stemness maintenance, as well as for regulating the coagulation and complement pathways, which are vital for resolving inflammatory processes and facilitating wound healing, thus underscoring their regenerative capacity . MESO tumors encompass epithelioid, biphasic, and sarcomatoid subtypes, each exhibiting distinct EMT phenotypes. Previous research has highlighted the association of certain EMT genes, such as COL5A2 , SPARC , and ACTA2 , with the upregulation of the TGF-β1, hedgehog, and IL-2-STAT5 signaling pathways . Among mesothelioma cells from the CC, CTC, and Lav groups, the up-regulated genes in the Lav group appear to predominantly govern the interaction between tumor cells and the microenvironment, including pathways associated with EMT. Dongre et al. elucidated new insights into the mechanisms underlying EMT and its implications for cancer. They highlighted how EMT can confer increased tumor-initiating and metastatic potential to cancer cells, as well as render them more resistant to certain therapeutic regimens . The murine mesothelioma RN5 cell line, characterized by biphasic morphology, exemplifies this phenomenon. Mesenchymal phenotypic tumor cells are known to drive the EMT process. Conventional therapies, such as cisplatin-based chemotherapy and gamma ray radiation, have been shown to result in the enrichment of mesenchymal stem cells , . EMT has emerged as one of the mechanisms contributing to therapy resistance in PM . More recently, cancer-associated fibroblasts (CAF) have been identified as key stromal cells driving the EMT process in cancer. Hu et al. discovered that CAF-derived exosomal LINC00355 promotes EMT and chemoresistance in colorectal cancer, further highlighting the intricate interplay between tumor cells and the tumor microenvironment in driving cancer progression and therapeutic resistance . Mesenchymal stromal cells (MSC) and fibroblasts often exhibit similar morphology, leading to challenges in distinguishing between the two cell types. However, studies have revealed that cell subsets with the MSC phenotype also display characteristics of fibroblasts, whereas cell subsets with the fibroblast phenotype do not necessarily demonstrate the MSC phenotype, suggesting a unidirectional relationship in which fibroblasts may originate from MSC subsets . Recent advancements in scRNA-seq have significantly enhanced our understanding of mesothelioma's cellular complexity, offering valuable information that can be leveraged to refine treatment selection and develop more effective, individualized therapeutic strategies. In conclusion, tumor cell heterogeneity is a fundamental characteristic of most cancers, arising from the accumulation of genetic, epigenetic, and functional changes in tumor cells. Phenotypic heterogeneity of tumor cells may be driven by genetic alterations, epigenetic modifications, or the influence of the tumor microenvironment. Tumor cells within a heterogeneous population can exhibit distinct functional properties.
We mainly aimed to identify the unique gene signature of each mesothelioma cell cluster and the distinct functional properties of tumor cells within heterogeneous subpopulations, showing that some tumor cells may possess stem cell-like characteristics, with the capacity to self-renew and differentiate into other cell types, while others may differ in proliferative potential. Functional heterogeneity can also involve variances in cellular behavior, such as migration, invasion, angiogenesis, and response to therapy. Our findings may provide clues to better understand the specific functions contributing to tumorigenesis and progression, and thereby help identify potential novel targets for therapeutic strategies.
Murine mesothelioma RN5 cell culture and mouse models

The murine mesothelioma cell line RN5 was initially derived from C57BL/6 mice subsequent to asbestos exposure by our research team . These RN5 cells exhibited biphasic morphology. Cells were maintained in RPMI1640 medium supplemented with 10% fetal bovine serum and 1% penicillin–streptomycin at 37 °C in a 5% CO2 atmosphere. To ensure cell line integrity, prophylactic treatment with 5 µg/ml Plasmocin™ (Invivogen) was administered for a minimum of 2 weeks, confirming mycoplasma-free status. For experimental procedures, exponentially growing RN5 cells (approximately 90% confluence) were prepared as follows: 1) for scRNA-seq, RN5 cells (2 × 10^6 cells in 500 µl PBS) were submitted; 2) for intraperitoneal (ip) injection into 6–8-week-old C57BL/6 mice obtained from the Jackson Laboratories, RN5 cells (2 × 10^6 cells in 200 µl PBS) were administered. Over an 8-week observation period, five mice were sacrificed weekly, with naive mice serving as controls. Total cells were harvested via peritoneal lavage with PBS. Briefly, the peritoneal cavity was exposed and rinsed with 5 ml PBS per mouse, and the lavage was collected. Tumor spheroids were removed by filtration through a 40 µm cell strainer (ThermoFisher). The fresh single cells obtained were utilized for scRNA-seq analysis; and 3) peripheral blood was collected at 4 weeks post tumor cell injection. Upon CO2 inhalation-induced euthanasia, approximately 8 ml of pooled blood was collected from 20 tumor-bearing mice. Enrichment of circulating tumor cells was performed using the obtained blood samples. All experimental protocols were approved by the Committee of the Animal Resources Centre, Animal Use Protocol (AUP#3399), University Health Network (UHN). All methods were carried out in accordance with the ARRIVE guidelines and with the relevant guidelines and regulations at UHN.

Enrichment of circulating tumor cells (CTC) from peripheral blood

The enrichment of MSLN+CD45− CTC from blood was conducted using a MACS column and a microfluidic chip, following a protocol that we previously reported , . In brief, fresh blood collected from tumor-grafted mice underwent gradient centrifugation using the peripheral blood mononuclear cell (PBMC) isolation protocol with Leucosep tubes . In this protocol, 15 ml of Ficoll was added to a 50 ml Leucosep tube at room temperature (RT). The tube was then centrifuged for 1 min at 1000 × g. Following this, the tube was filled with anticoagulated blood and subsequently rinsed with a balanced salt solution (5% sodium citrate in PBS). The blood was diluted at a 1:3 ratio with the balanced salt solution and subjected to centrifugation for 25 min at 800 × g at RT, with the brake off. After gradient centrifugation, the buffy coat was isolated and incubated with rat anti-mouse CD45 microbeads (#130-052-301, Miltenyi Biotech) for 15 min in the refrigerator. The incubated samples were then processed through a MACS LD column (#130-042-901, Miltenyi), and the CD45− fractions were collected for further processing. Subsequently, these CD45− cells were incubated with rat anti-mouse MSLN antibody (#D233-3, MBL International) for 20 min in the refrigerator, followed by incubation with anti-rat IgG microbeads (#130-048-502, Miltenyi) for 15 min in the refrigerator. The labeled samples were processed by a microfluidic immunomagnetic cell sorting (MICS) device to capture MSLN+ populations.
Post-capture, the MSLN+CD45− fraction was resuspended in 500 µL of PBS and immediately submitted for scRNA-seq. In some experiments, a small portion of cells, both pre- and post-capture at each stage, was stained with rat anti-mouse CD45 (#550994, BD Bioscience) and human recombinant anti-microbead antibodies (#130-122-219, Miltenyi) for 20 min in the refrigerator. These stained samples were analyzed using an Attune NxT acoustic flow cytometer to evaluate the purity of MSLN+CD45− cells within the samples.

Single cell RNA sequencing (scRNA-seq) analysis

Fresh single cells, encompassing cultured cells (CC), circulating tumor cells (CTC), and total cells from peritoneal lavage (Lav), were prepared as described previously. These cells were subsequently processed by the Princess Margaret Genomics Centre at the University Health Network (UHN) following standard protocols available at www.pmgenomics.ca . Analysis of single-cell gene expression in clusters was conducted utilizing Loupe Cell Browser v5.0.0, provided by 10× Genomics, as well as CReSCENT (CanceR Single Cell ExpressioN Toolkit), an online platform accessible at https://crescent.cloud/ . CReSCENT is populated with public datasets and preconfigured pipelines that are accessible to computational biology non-experts, and user-editable to allow for optimization, comparison, and re-analysis on the fly. CReSCENT is available under an open-source license via GitHub (General Public License v3.0). 10× Genomics Chromium v2 was used for library preparation. Sequencing was performed on the Illumina NextSeq platform. A threshold of 3659 unique molecular identifiers (UMIs) per barcode (linear scale) was applied, removing 6% of cells (506/7868). Barcodes with unexpectedly high UMI counts may represent multiplets, and barcodes with very few genes may represent low-quality cells or empty droplets; barcodes with fewer than three detected genes were excluded from reclustering. To set thresholds for mitochondrial UMIs, the pre-selected mitochondrial gene set of the mouse reference genome (mm10) was used, and cells with a mitochondrial read percentage above 10% were excluded. Gene expression counts were log-normalized. This threshold was used to identify potential over-expression of mitochondrial genes, which could indicate poor cell quality or cells undergoing cellular stress or death.

Data acquisition and analysis

Differential gene expression analysis was conducted based on predefined threshold criteria, namely a log2 fold change greater than 1 (equivalent to a twofold change) and a p-value less than 0.05 (see the illustrative sketch below). For pathway analysis, total tumor cells obtained from the merged data of the three groups (CC, CTC, and Lav) were identified using the tumor cell marker genes Msln, Wt1, and Sparc. Subsequently, tumor cell clusters were reclustered based on sample ID to identify globally distinguishing genes. Further reclustering of tumor cells within each group allowed for the investigation of specific functions within subpopulations. The top up-regulated genes associated with hallmark pathways and gene ontology (GO) annotation terms (BP: biological process; CC: cellular component; MF: molecular function) within each group were analyzed using the GSEA online platform available at https://www.gsea-msigdb.org/gsea (versions: MSigDB 2024.1; GSEA 4.3.3). Additionally, survival analysis was performed using The Cancer Genome Atlas (TCGA) data.
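The QC thresholds and the differential-expression filter just described can be illustrated with a minimal scanpy sketch. This is not the authors' CReSCENT pipeline; the input file, the "cluster" column, and the direction of the UMI cut-off are assumptions.

```python
import scanpy as sc

# Hypothetical raw-count object; file name and "cluster" column are assumptions.
adata = sc.read_h5ad("rn5_counts.h5ad")

# Flag mitochondrial genes (mm10 naming convention) and compute QC metrics.
adata.var["mt"] = adata.var_names.str.startswith("mt-")
sc.pp.calculate_qc_metrics(adata, qc_vars=["mt"], percent_top=None,
                           log1p=False, inplace=True)

# UMI threshold (direction assumed: barcodes below the cut-off are dropped)
# and <10% mitochondrial reads, as described above.
adata = adata[(adata.obs["total_counts"] >= 3659)
              & (adata.obs["pct_counts_mt"] < 10)].copy()

# Log-normalize the counts.
sc.pp.normalize_total(adata)
sc.pp.log1p(adata)

# Per-cluster differential expression, then the |log2FC| > 1, p < 0.05 filter.
sc.tl.rank_genes_groups(adata, "cluster", method="wilcoxon")
deg = sc.get.rank_genes_groups_df(adata, group=None)
deg = deg[(deg["logfoldchanges"].abs() > 1) & (deg["pvals"] < 0.05)]
print(deg.groupby("group").size())  # significant genes per cluster
```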
The expression levels of the top up-regulated genes were evaluated to determine their association with the prognosis of mesothelioma patients. This analysis was carried out using the TCGA data analysis platform accessible at http://gepia.cancer-pku.cn/ (version: GEPIA2.0).

Hallmark gene set enrichment analysis

Gene set enrichment analysis (GSEA) was conducted to investigate functionally enriched pathways and hallmark gene sets associated with the identified subgroups. The hallmark gene sets utilized in the analysis were obtained from the Molecular Signatures Database (MSigDB), accessible at http://software.broadinstitute.org/gsea/msigdb/ . A significance threshold of p < 0.05 was applied to determine significantly enriched pathways . Particular genes that are involved in hallmark pathways are shown in Sankey plots . For the analysis of cell proliferation, the gene lists from each cluster within the groups were uploaded to the web-based gene set analysis toolkit available at https://www.webgestalt.org/ (WebGestalt V1.0). Mus musculus was selected as the organism to obtain biological process (BP) genes from the Gene Ontology (GO) database, allowing the calculation of the percentage of overlaps with cell proliferation within the BP category. For stemness analysis, stem cell type annotation was conducted using the online platform developed by the SysBio Lab at the University of Algarve, Portugal, accessible at http://stemchecker.sysbiolab.eu/ . Upon importing the gene list, Mus musculus (Mouse) was selected from the "Checkerboard Options", with both the "Mask Cell Proliferation Genes" and "Mask Cell Cycle Genes" options enabled to ensure a specific focus on stem cell-related genes. The analysis encompassed 25 stemness signatures and 73 transcription factor gene sets. The statistical details table provided the significance of enrichment for genes included in composite gene sets associated with different stem cell types among the input genes identified in StemChecker. Composite gene sets for various cell types represent the unions of all selected stemness signatures corresponding to each cell type. Significance (p-value) was calculated via the hypergeometric test, assessing enrichment against the full annotated genome of the selected organism. Additionally, adjusted p-values were calculated using Bonferroni correction to account for multiple comparisons. For EMT enrichment analysis, the up-regulated genes from each group were compared with the EMT hallmark gene set. This gene set, consisting of 200 genes, was downloaded from the Molecular Signatures Database (MSigDB) at https://www.gsea-msigdb.org/gsea/msigdb/cards/HALLMARK_EPITHELIAL_MESENCHYMAL_TRANSITION.html . To analyze the overlaps between any two comparisons, the InteractiVenn platform was utilized. This platform allows for the visualization of shared genes among different gene lists and is accessible at https://www.interactivenn.net/ . By comparing the up-regulated genes from each group with the EMT hallmark gene set, the analysis determines the number of genes from each group that may participate in the EMT process.

Association of the up-regulated genes with overall survival of MESO patients in TCGA

For gene set variation analysis (GSVA) and survival analysis, the Gene Set Cancer Analysis (GSCA) tool estimates the association between GSVA score and overall survival (OS) in MESO. GSVA scores and clinical survival data are merged by sample barcode.
Tumor samples are then divided into high and low GSVA score groups based on the median GSVA score. Subsequently, the R package survival is employed to fit the survival time and survival status of the two groups. Cox proportional-hazards models and log-rank tests are performed to generate Kaplan–Meier curves for the survival comparison between the high and low GSVA score groups in MESO . It is important to note that the GSVA score represents the variation of gene set activity over a specific cancer sample population in an unsupervised manner. The GSVA score reflects the integrated expression level of a gene set and is positively correlated with the expression of the gene set. Additional information on the GSVA score and its interpretation can be found at https://guolab.wchscu.cn/GSCA/#/expression (version 2024).

Gene expression correlated with cancer-related pathways in TCGA data

Using the same platform ( https://guolab.wchscu.cn/GSCA/#/expression ) and selecting the cancer type MESO, we computed the correlation between GSVA score and cancer-related pathways, as well as immune cell infiltration. The GSVA and pathway activity module presents the correlation between GSVA score and pathway activity, which is defined by pathway scores. This analysis provides insights into the relationship between the expression level of gene sets and the activity of cancer-related pathways. In this analysis, statistical significance is denoted by "*: p value ≤ 0.05" and "#: FDR ≤ 0.05", indicating results with p-values or false discovery rates (FDR) below the specified thresholds.

Statistical analysis

Statistical analysis was conducted using GraphPad Prism 8.0 (GraphPad Inc., San Diego, CA, USA). For comparisons between two groups, an unpaired two-tailed Student's t-test was employed. A p-value less than 0.05 was considered statistically significant. Results are presented as mean ± SEM. Significance levels are indicated as follows: *, p < 0.05; **, p < 0.01; ***, p < 0.001 in all figures. For survival analysis comparing overall survival (OS) between low- and high-risk groups, Kaplan–Meier analysis was performed with the log-rank test (see the sketch below). All tests were two-tailed, and a p-value < 0.05 and/or FDR < 0.05 was considered significant.
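The GSCA median-split survival comparison described above can be reproduced in a few lines. The platform itself uses the R survival package; the sketch below uses the Python lifelines package instead, with a small hypothetical patient table standing in for the TCGA MESO data.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical per-patient table: GSVA score of a gene signature plus follow-up.
df = pd.DataFrame({
    "gsva":  [0.42, -0.10, 0.18, -0.35, 0.05, -0.22],
    "time":  [14.0, 30.5, 9.2, 41.0, 22.1, 36.7],  # months of follow-up
    "event": [1, 0, 1, 0, 1, 0],                   # 1 = death observed
})

# Split at the median GSVA score, as done on the GSCA platform.
median = df["gsva"].median()
high, low = df[df["gsva"] >= median], df[df["gsva"] < median]

# Kaplan-Meier fits for the two groups.
km = KaplanMeierFitter()
for label, grp in [("high GSVA", high), ("low GSVA", low)]:
    km.fit(grp["time"], grp["event"], label=label)
    print(label, "median survival:", km.median_survival_time_)

# Log-rank test comparing the two survival curves.
res = logrank_test(high["time"], low["time"],
                   event_observed_A=high["event"],
                   event_observed_B=low["event"])
print("log-rank p =", res.p_value)
```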
Supplementary Information. Supplementary Table S1. Supplementary Table S2.
|
Circulating microRNA sequencing revealed miRNome patterns in hematology and oncology patients aiding the prognosis of invasive aspergillosis | 933116cc-b4e3-4c2b-b1a3-0f593a69e262 | 9065123 | Internal Medicine[mh] | Globally, the incidence of fungal infections is evidenced by the worrisome prevalence values of approximately 20 million cases of allergic fungal diseases and more than 1 million cases of invasive fungal infections (IFIs) , . IFIs are associated with dramatic mortality rates, ranging from 20 to 50% despite currently available powerful antifungal agents , . Underscoring the burden of invasive aspergillosis (IA), a marked increase in disease prevalence was observed due to improved diagnostics, an overall escalation in the use of immunosuppressive therapies, and an increased number of organ transplantations performed in recent decades , . IA remains a major issue among patients who have undergone either stem cell or solid organ transplantation, with a prevalence of over 10% – . Considering the impact of the severity of infection, mold specific nucleic acid biomarkers and galactomannan antigen (GM) may prove to be valuable for a timely disease diagnosis. Because of devastating statistics and high mortality rates, new and alternative diagnostic strategies are needed. To diagnose patients with IA in a timely manner, there is a comprehensive need to identify biomarkers with high specificity and sensitivity. Moreover, the application of minimally invasive procedures to obtain nucleic acid targets has become a research trend. Ultimately, biomarkers must be easily detectable with satisfactory positive and negative predictive values and must also discriminate hematology and oncology (HO) patients with or without IA. MicroRNAs (miRNAs) are a class of typically small noncoding RNAs that can regulate gene expression posttranscriptionally through miRNA::mRNA interactions. By mediating the degradation of specific mRNAs, miRNAs reportedly play an important role in the pathogenesis of infectious diseases , . Because of their high diagnostic potential, stable, blood-born miRNAs have been evaluated as potential biomarkers of IFIs. Numerous studies have reported the aberrant expression of several miRNAs in various conditions, including hematological malignancies and bloodstream infections . There is promising evidence that despite the lack of standardized protocols in disease prognosis and current clinical practice, miRNAs constitute a reliable tool for future use . In recent years, extraordinary progress has been made in terms of identifying miRNAs secreted in different body fluids. Cell-free miRNAs are not readily degraded by enzymes and are resistant to changes in temperature, storage, acids and alkalis that might also be exploited in IA . In addition to the major technical difficulties of “liquid biopsy”, standardization is also needed for their successful clinical application . The evaluation of stable miRNA profiles in various biofluid samples is a feasible diagnostic procedure in clinical laboratories. Although previous studies revealed that differentially expressed miRNAs (DEMs) were associated with IFIs, currently, there are no validated prognostic miRNA markers associated with IA – . Unlike SNPs and differential mRNA expressions, miRNAs are scarcely studied in fungal infections while having potential as a future host diagnostic and/or prognostic markers. 
This study provides a comprehensive dissection and discussion of differentially expressed miRNAs in hematology and oncology patients and thus presents a valuable resource on circulating biomarkers that might be involved in the progression of IA.
Characteristics of the patient cohort
In this retrospective study, 50 participants (26 hematology and oncology patients and 24 healthy volunteers) were recruited from two hematology centers in Hungary (the University of Debrecen, Faculty of Medicine, Institute of Internal Medicine, Debrecen, Hungary; and the Institute of András Jósa County and Teaching Hospital, Division of Haematology, Nyíregyháza, Hungary) between May 2017 and November 2020. Participants in the cohort were balanced according to age (mean ± SD: 47.19 ± 13.93 years) but not sex (16 males/10 females). The vast majority of patients suffered from acute lymphoid leukemia (ALL, 53.85%), followed by acute myeloid leukemia (AML, 19.23%), non-Hodgkin lymphoma (NHL, 15.38%), myeloid sarcoma (MS, 7.69%), and chronic lymphocytic leukemia (CLL, 3.85%) (Table ). Seventeen patients died during the study period; in 2 of them, IA was proven post-mortem by periodic acid-Schiff (PAS) staining. In total, 69.23% of the patients suffered from neutropenic fever, defined as a single oral temperature of ≥ 38.3 °C (101 °F) or a temperature of ≥ 38.0 °C (100.4 °F) sustained over a 1 h period, and 72.22% of these patients developed recurrent fever refractory to antibiotic treatment.

Sequencing the small RNA transcriptome of the patient cohort
The number of mapped cDNA reads was 3,450,028 ± 1,234,556 (75 bp each) per sample, totalling 81,075,658 reads overall. The majority of the sequences were 21–23 nucleotides long. More than 90% of the clean reads were retained after filtering out low-quality tags, removing adaptors and cleaning up contaminants. Small RNA sequence types (represented by uniqueness) and length distributions were analysed. Overall, more than 95% (± 2%) of the clean reads were assigned as miRNAs.

Quantitative analysis of the small noncoding RNA transcriptome revealed shared and unique miRNAs
In this study, high-throughput small RNA sequencing followed by in silico data analysis was used to detect unique and conserved circulating miRNAs in the study cohort, comprising healthy controls (n = 24) and HO patients with (HO-proven IA, n = 4; HO-probable IA, n = 3) or without (HO-possible IA, n = 19) IA. In total, 735 miRNAs were omitted from the analysis due to a very low read number (reads per million, RPM < 10) across all samples. We identified 364 miRNAs with read numbers above this threshold (RPM > 10) and focused on these in the following analyses. A Venn diagram was created to represent the numbers of miRNAs that were shared ("intersections") or unique between the different datasets (Fig. ). Small RNA transcriptome compositions exhibited remarkable differences between our experimental groups (Fig. a). Overall, 190 miRNAs were uniformly present in all experimental groups, representing 19.02% of all identified miRNAs. Considering the global expression level distribution profiles of the common miRNAs, considerable differences were detected when comparing healthy controls to HO patients with or without IA (Fig. b). As shown, the IA patient group exhibited remarkable changes in the read numbers of several miRNAs. Analyses of the expressed conserved miRNAs revealed that most genes were uniformly up- or downregulated in the non-IA patient group. We also identified unique miRNAs in the different experimental groups (Supplementary Fig. ). In total, 21 and 20 miRNAs were present exclusively in healthy controls and non-aspergillosis HO patients, respectively. Based on our data, we found 41 miRNAs that were present in hemato-oncology patients with proven/probable IA. Of these, 21 were present in patients with proven IA (HO-proven), whereas 17 were present in patients with probable IA (HO-probable).
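As an illustration of the filtering and set logic described above, the sketch below computes reads-per-million (RPM) values from a raw count matrix, drops miRNAs with RPM < 10 across all samples, and derives shared and group-exclusive miRNA sets as in a Venn diagram. The count table and group assignments are hypothetical placeholders, not the study's actual pipeline code.

```python
import pandas as pd

# Hypothetical raw read-count matrix: rows = miRNAs, columns = samples.
counts = pd.DataFrame(
    {"S1": [500, 3, 120], "S2": [420, 1, 90], "S3": [610, 0, 150]},
    index=["hsa-miR-26a-5p", "hsa-miR-novel", "hsa-miR-191-5p"],
)

# Normalize each library to reads per million (RPM).
rpm = counts / counts.sum(axis=0) * 1e6

# Drop miRNAs with RPM < 10 across all samples (the omission rule above);
# keep those exceeding the threshold in at least one sample.
kept = rpm[(rpm >= 10).any(axis=1)]

# Hypothetical group assignment; the study used H, HO-possible/-probable/-proven.
groups = {"healthy": ["S1"], "HO_IA": ["S2", "S3"]}
detected = {g: set(kept.index[(kept[cols] >= 10).any(axis=1)])
            for g, cols in groups.items()}

shared = set.intersection(*detected.values())            # Venn "intersection"
unique = {g: s - set().union(*(o for k, o in detected.items() if k != g))
          for g, s in detected.items()}                  # group-exclusive miRNAs
print(shared, unique)
```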
DEMs in HO patients with IA
Differential expression analysis was performed by retrieving the expressed reads of the 190 conserved miRNAs. Multiple miRNAs showed remarkable differences in expression when comparing HO patients with (HO-proven, HO-probable) or without (HO-possible) IA. Using the LIMMA statistical model, volcano plots were generated to identify miRNAs showing fold differences with high statistical significance (P ≤ 0.05) and log2-fold changes greater than 1 or lower than −1, i.e., |log2 fold change| > 1 (Fig. ). Based on these stringent criteria, we were able to reduce the number of conserved miRNAs to 57. Thereafter, we further identified 21 miRNAs in the IA group whose expression profiles were significantly different (twofold change with P < 0.05) from those of non-IA patients. Altogether, we identified 36 IA-specific DEMs, of which 15 were upregulated and 21 were downregulated.

Differential expression analysis of the circulating DEMs led to the identification of distinct clusters
The DEM patterns were also clustered to confirm the diagnostic potential of circulating miRNA signatures during IA disease progression. A hierarchically clustered heatmap was constructed by relating the log2-fold change expression values of the 36 DEMs in patients with IA to those in healthy volunteers (Fig. ). Of these miRNAs, 15 (hsa-miR-16-2-3p, hsa-miR-342-5p, hsa-miR-32-5p, hsa-miR-26b-5p, hsa-miR-223-5p, hsa-miR-26a-5p, hsa-miR-625-3p, hsa-let-7a-5p/7c-5p, hsa-miR-92a-3p, hsa-miR-7706, hsa-miR-423-3p, hsa-miR-130b-5p, hsa-miR-423-5p, hsa-let-7b-5p, hsa-miR-486-5p) were significantly upregulated, while 21 (hsa-miR-181b-5p, hsa-miR-152-3p, hsa-miR-23a/b-3p, hsa-miR-324-5p, hsa-miR-185-5p, hsa-miR-30a-5p, hsa-miR-130a-3p, hsa-miR-130b-3p, hsa-miR-191-5p, hsa-miR-361-5p, hsa-miR-93-3p, hsa-miR-339-5p, hsa-miR-103a-3p, hsa-miR-15a-5p, hsa-miR-20a-5p, hsa-miR-93-5p, hsa-miR-106a-5p/17-5p, hsa-miR-20b-5p, hsa-miR-221-3p, hsa-miR-106b-5p, hsa-miR-500a-3p) were downregulated due to IA. Three miRNAs (hsa-miR-1976, hsa-miR-423-5p, hsa-let-7b-5p) exhibited inconsistent expression patterns in IA patients. Beta diversity relationships are summarized in two-dimensional multidimensional scaling (MDS) scatterplots (Fig. ), in which each point represents a sample and the distances between points represent differences in DEM expression. The diversity plots, generated to represent the DEM-induced alterations discriminating IA patients from controls, resulted in two nonoverlapping clusters (cluster 1 and cluster 2) with different spatial ordinations. The MDS plot shows that, on the basis of the expression patterns of the IA-related miRNA signatures, it is possible to discriminate infected patients (HO-proven and HO-probable IA) from noninfected controls (HO-possible IA and healthy, H).
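The DEM thresholding used for the volcano plots (P ≤ 0.05 and |log2 fold change| > 1, i.e., at least a twofold change) can be expressed compactly. The sketch below assumes a hypothetical results table with columns like those exported from differential-expression packages such as LIMMA or edgeR.

```python
import pandas as pd

# Hypothetical differential-expression results (stand-in for LIMMA/edgeR output).
res = pd.DataFrame(
    {"log2FC": [1.8, -2.3, 0.4, -1.1], "p_value": [0.01, 0.002, 0.30, 0.04]},
    index=["hsa-miR-26a-5p", "hsa-miR-191-5p", "hsa-miR-x", "hsa-miR-20a-5p"],
)

# DEM criteria: statistically significant and at least a twofold change.
dems = res[(res["p_value"] <= 0.05) & (res["log2FC"].abs() > 1)]

up = dems[dems["log2FC"] > 0].index.tolist()    # upregulated in IA
down = dems[dems["log2FC"] < 0].index.tolist()  # downregulated in IA
print(f"{len(dems)} DEMs: {len(up)} up, {len(down)} down")
```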
Validation of the DEMs
An essential component of reliable quantitative reverse transcription PCR (qRT-PCR) analysis is the normalization of gene expression data, because it controls for technical variation and allows comparisons of gene expression levels among different samples. An ideal reference gene must be stably expressed and abundant, without any significant variation in its expression status. Due to high sample heterogeneity, there is no consensus on the best reference gene for normalizing miRNA gene expression data in HO patients. In this study, 20 candidate reference genes were investigated for normalizing the RT-qPCR data, and their stability was evaluated. On the basis of the overall ranking data, hsa-miR-181a-5p showed the highest stability among the 20 tested miRNAs (Supplementary Fig. ). Of the 62 most abundant DEMs tested, 14 miRNAs were validated successfully by qRT-PCR across our sample groups. Livak's 2^−ΔΔCT method was used to quantify the relative fold changes in gene expression in patients (HO-proven and HO-probable vs. HO-possible) relative to healthy controls: for each sample, the normalized CT values of single miRNAs were related to the mean CT values measured in healthy controls (Fig. a). Based on these results, we found that the gene expression of 14 miRNAs (hsa-miR-191-5p, hsa-miR-106b-5p, hsa-miR-16-2-3p, hsa-miR-185-5p, hsa-miR-26a-5p, hsa-miR-26b-5p, hsa-miR-106b-3p, hsa-miR-15a-5p, hsa-miR-20a-5p, hsa-miR-20b-5p, hsa-miR-106a-5p, hsa-miR-103a-5p, hsa-miR-93-5p, hsa-miR-17-5p) exhibited significant changes due to IA. To corroborate the congruent expression tendencies of the small RNA-seq data and the qRT-PCR measurements, the normalized read counts (in RPM) of IA patients relative to healthy controls, together with their density distributions, were also determined for the IA-infected (HO-proven and HO-probable IA) vs. noninfected (HO-possible IA) hematology and oncology patients (Fig. b).

Diagnostic performance of miRNA biomarkers from whole blood
To estimate the capability of the DEMs to discriminate aspergillosis-infected from noninfected patients using whole blood samples, receiver operating characteristic (ROC) curve analyses were applied (Fig. ). On the basis of the qRT-PCR-validated gene expression analyses, eight DEMs displayed high discriminatory power (hsa-miR-191-5p, hsa-miR-106b-5p, hsa-miR-16-2-3p, hsa-miR-26a-5p, hsa-miR-15a-5p, hsa-miR-20a-5p, hsa-miR-106a-5p and hsa-miR-17-5p). All of these miRNAs were downregulated in the IA-confirmed group, with statistically significant fold changes (P < 0.05) relative to noninfected controls. Five miRNAs (hsa-miR-191-5p, hsa-miR-106b-5p, hsa-miR-15a-5p, hsa-miR-20a-5p, hsa-miR-106a-5p) demonstrated excellent discriminatory power, with AUC values of 1.0, and three additional miRNAs (hsa-miR-16-2-3p, hsa-miR-26a-5p and hsa-miR-17-5p) displayed AUC values greater than 0.98. In addition to examining the distribution of the CT values and the discriminatory power of the miRNAs, the normalized CT values for cases (proven and probable IA) and controls (possible IA) were dichotomized by mapping the sensitivity values against 1 − specificity, and the optimal cutoff value for each biomarker was estimated as the point that maximized sensitivity and specificity.
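A worked sketch of the Livak 2^−ΔΔCT calculation and the ROC-based cutoff selection described above follows; the CT values, case labels and reference miRNA assignment are hypothetical placeholders. Youden's index is used here as one common way to pick the point that maximizes sensitivity and specificity.

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

# Hypothetical CT values for one target miRNA and the reference (hsa-miR-181a-5p).
ct_target = np.array([28.1, 27.8, 29.0, 24.0, 24.3, 23.9])
ct_ref    = np.array([20.0, 19.8, 20.1, 20.2, 19.9, 20.0])
is_case   = np.array([1, 1, 1, 0, 0, 0])  # 1 = proven/probable IA, 0 = control

delta_ct = ct_target - ct_ref                     # normalize to the reference miRNA
ddct = delta_ct - delta_ct[is_case == 0].mean()   # ΔΔCT vs. mean of controls
fold_change = 2.0 ** (-ddct)                      # Livak 2^-ΔΔCT; < 1 = downregulated

# ROC analysis: a downregulated marker yields higher ΔCT in cases, so ΔCT itself
# serves as the score; sensitivity (TPR) is mapped against 1 - specificity (FPR).
fpr, tpr, thresholds = roc_curve(is_case, delta_ct)
print(f"AUC = {auc(fpr, tpr):.3f}")

# Optimal cutoff via Youden's J = sensitivity + specificity - 1.
best_cutoff = thresholds[np.argmax(tpr - fpr)]
print(f"optimal ΔCT cutoff: {best_cutoff:.2f}")
```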
Computational prediction reveals genes and biological functions affected by dysregulated miRNAs
The biological effects of miRNAs depend on various factors, and predicted interactions were therefore retrieved from integrated databases. Target recognition refers to the process by which mature miRNAs recognize their complementary mRNA sequences and regulate gene expression. An online webtool, miRabel, was employed to predict the target genes and biological pathways related to the dysregulated miRNAs, considering the evolutionary conservation, Watson–Crick complementarity, and thermodynamic properties of the pairing between the seed region of each miRNA and its target mRNA. On the basis of these in silico predictions, we generated a list of 55 target genes whose expression might be posttranscriptionally influenced by at least three IA-specific DEMs (Fig. ). Pathway analysis was performed with the KEGG database (Supplementary Fig. ). The in silico pathway analyses indicated that twelve relevant biological functions were influenced by changes in the IA-affected miRNAs: "cell homeostasis", "trafficking/vascular transport", "extracellular matrix (ECM)", "cell adhesion", "cell differentiation", "cell cycle", "tumorigenesis", "apoptosis", "immune response", "infectious diseases", "synaptic plasticity" and "catabolic pathways". Of these, tumorigenesis (27 hits), the cell cycle (20 hits), the immune response (17 hits), cell differentiation (14 hits) and apoptosis (13 hits) were the top 5 affected pathways. The associations of these miRNAs with the regulated genes of these pathways have been experimentally demonstrated in previous studies.
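The selection of targets hit by at least three DEMs can be illustrated with a simple counting step over per-miRNA prediction lists; the short lists below are hypothetical stand-ins for the miRabel top-100 tables.

```python
from collections import Counter

# Hypothetical top predicted targets per DEM (stand-ins for miRabel top-100 lists).
predictions = {
    "hsa-miR-17-5p":   ["TMEM100", "MAPRE3", "STAT3"],
    "hsa-miR-20a-5p":  ["TMEM100", "MAPRE3", "STAT3"],
    "hsa-miR-106b-5p": ["TMEM100", "MAPRE3", "STAT3", "PTPN4"],
    "hsa-miR-15a-5p":  ["TMEM100", "RAP2C"],
}

hits = Counter(gene for targets in predictions.values() for gene in targets)
recurrent = sorted(g for g, n in hits.items() if n >= 3)  # targeted by >= 3 DEMs
print(recurrent)  # candidates analogous to the 55-gene list
```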
IFIs are a major cause of mortality in immunosuppressed patients. IA is the most common mold infection in immunocompromised hosts and is associated with a poor prognosis and high mortality if diagnosis is delayed. Missed diagnoses are encountered when appropriate diagnostic tools are not available, especially in low-income and middle-income areas. Currently, the early detection of IA is very difficult because most patients have nonspecific symptoms, postponing a correct diagnosis and therapy. The identification of easily accessible, noninvasive, blood-borne biomarkers at early stages of disease progression is crucial for the evaluation of high-risk subjects and the establishment of follow-up strategies. Technological advances in high-throughput molecular methods have made it possible to detect miRNA expression patterns in different biological samples. Obtaining circulating miRNAs from the blood represents a minimally invasive method for the early detection of disease and can aid in treatment decisions. The discovery of disease-specific miRNA expression signatures is essential for accurate diagnosis and a better understanding of disease pathology, and blood is an easily obtained biofluid that can be used to identify such biomarkers. Considering the increasing evidence from the literature that the dysregulated expression of miRNAs plays a pivotal role in various infections, we proposed that certain circulating miRNAs may play a significant role in the outcome of IA, suggesting that their relative gene expression levels might also serve as indicators of disease progression. By performing small RNA sequencing, this study undertook a comprehensive exploratory evaluation to establish the full repertoire of circulating miRNAs in whole blood among critically ill patients at high risk of IFIs. Circulating miRNAs have also recently been recognized as promising disease biomarkers in infectious diseases, but relatively few studies have examined their role in IA; the regulatory roles of hsa-miR-132-5p and hsa-miR-212-5p, for example, have been associated with fungal infections. Taking baseline patient characteristics and underlying malignancies into account, our primary goal was to decipher aberrant miRNA expression patterns. We hypothesized that by comparing distinct miRNA-seq profiles of shared miRNAs between cases and controls, we could identify specific prognostic markers to aid disease diagnosis. In this study, the most abundant, conserved miRNAs constituted 19.02% of the pool. Differential expression analysis was employed to systematically search the small RNA transcriptome data for a subset of circulating miRNAs representing the most promising combinations of DEMs. Of the potential DEMs, we identified a subset of miRNAs whose expression signatures are unlikely to be influenced by the underlying hematological malignancy but are likely indicators of IA infection. In miRNA-based biofluid analyses, when a continuous variable is considered a diagnostic marker, the method adopted for data normalization and the choice of the reference gene are very important. Using hsa-miR-181a-5p as a reference, we found that the dysregulated miRNAs hsa-miR-191-5p, hsa-miR-106b-5p, hsa-miR-16-2-3p, hsa-miR-26a-5p, hsa-miR-15a-5p, hsa-miR-20a-5p, hsa-miR-106a-5p and hsa-miR-17-5p showed strong discriminatory power, with AUC values greater than 0.98. Despite continued progress, target prediction of miRNAs remains a challenge, since aggregated databases often show inconsistent results.
To date, approximately 3000 mature human miRNAs have been referenced in miRBase, but several recent studies suggest that there may be a larger number. Furthermore, the bioinformatic identification of miRNA targets remains a challenge because mammalian miRNAs are characterized by poor homology toward their target sequences. Confirmation of the potential biological relevance of these predicted targets is laborious and was not the goal of the current project. In relation to IA, the in silico analysis of miRNA-influenced genes suggested an enrichment of pathways associated with tumorigenesis, the cell cycle, the immune response, cell differentiation and apoptosis. Interestingly, hsa-miR-16-2-3p was shown to have no influence on these genes, and hsa-miR-191-5p affected only the gene encoding the microtubule-associated protein RP/EB family member 3 (MAPRE3). As a member of the transmembrane protein family, the product of the transmembrane protein 100 gene (TMEM100) has also been experimentally shown to be involved in cell differentiation, apoptosis and synaptic plasticity. Two genes, TMEM100 and MAPRE3, were posttranscriptionally influenced by five miRNAs; both were markedly targeted by hsa-miR-17-5p (TMEM100 miRabel score: 0.00056, MAPRE3 miRabel score: 0.00069), hsa-miR-20a-5p (TMEM100 miRabel score: 0.00048, MAPRE3 miRabel score: 0.0012), and hsa-miR-106b-5p (TMEM100 miRabel score: 0.00036, MAPRE3 miRabel score: 0.00108), and weakly targeted by hsa-miR-106a-5p (TMEM100 miRabel score: 0.0485, MAPRE3 miRabel score: 0.0488). Previous studies have also implied a direct link between TMEM100 and miR-106b-5p in relation to tumorigenesis. Based on our data, the dysregulated hsa-miR-17-5p, hsa-miR-20a-5p and hsa-miR-106b-5p target the signal transducer and activator of transcription 3 (STAT3) gene in HO-IA patients. STAT3 encodes a transcription factor of the STAT protein family and has been proven to play an important regulatory role in both bacterial and fungal infectious diseases. A defect in the IFN-γ response of STAT3-deficient patients has already been demonstrated upon stimulation with heat-killed Staphylococcus aureus and Candida albicans. In addition, the tyrosine protein phosphatase nonreceptor type 4 protein, encoded by the PTPN4 gene, has been implicated in infectious diseases and also plays a role in immunity and cell homeostasis. We found that the PTPN4, STAT3 and RAP2C genes were the main targets, with important roles in relevant biological processes. In humans, loss-of-function mutations of the STAT3 gene are frequently associated with susceptibility to bacterial as well as fungal infections. Francois Danion and colleagues showed that STAT3-deficient patients with aspergillosis had a defective adaptive immune response against A. fumigatus infection and produced lower levels of cytokines, including IFN-γ, IL-17, and IL-22. Based on their estimations, one major protective host mechanism against A. fumigatus infection acts via IFN-γ. Furthermore, a recent study showed that the majority of lung-derived T cells upon A. fumigatus infection were Th17 cells, suggesting that the decreased production of Th1 and Th17 cytokines in STAT3-deficient patients could explain their susceptibility to A. fumigatus. The tumor suppressor-encoding TMEM100 gene was found to be targeted by five IA-related miRNA biomarkers: hsa-miR-15a-5p, hsa-miR-17-5p, hsa-miR-20a-5p and hsa-miR-106a/b-5p.
The fact that all of the miRNAs targeting TMEM100 showed significant changes in gene expression in HO patients with aspergillosis also suggests its involvement in both potentially oncogenic and infection-related biological pathways. Interestingly, in previous studies, the regulatory roles of some of these miRNAs were associated with mycobacterial infectious disorders. By binding to the 3′-untranslated region of cathepsin S (CtsS) mRNA, hsa-miR-106b-5p was found to be involved in the posttranscriptional regulation of CtsS during mycobacterial infection. Additionally, miR-26a-5p was shown to act during Mycobacterium tuberculosis infection by targeting the IFN-γ signaling cascade. Finally, hsa-miR-17-5p was proven to regulate tuberculosis-induced autophagy in macrophages by targeting STAT3. The experimental design of this study allowed us to decipher complex miRNA signatures associated with IA by integrating small RNA sequencing and multiple bioinformatics tools. A miRNA::mRNA regulatory network was also constructed to investigate the relevant downstream molecular mechanisms of the predicted target genes of the captured miRNAs. To our knowledge, this is the first effort to characterize the levels of blood-borne, circulating miRNAs in IA in order to identify stable, abundant disease-specific biomarkers. Our results suggest that some DEMs have the potential to serve as good and abundant blood-borne biomarkers for IA. Our data may also lead to a better understanding of disease pathogenesis and provide insight into the complexity and diversity of the small RNA molecules that are dysregulated in IA in immunodeficient patients.
Regarding its incidence, IA can be considered a rare disorder. Based on epidemiological data, the estimated occurrence of IA is 5–13% in HSCT recipients and 10–20% in patients receiving intensive chemotherapy for leukemia. In our study, disease prevalence exceeded 25%, which might be explained by the relatively small hemato-oncology population size (HO-proven/probable IA). Due to the imbalance and limited size of the study cohort, this study should be considered exploratory. For a higher level of confidence, differential expression of the miRNome should be studied in an extended cohort recruited from a more diverse HO population; validation of the results in an extended population with a broader range of patients is therefore needed. There is also a lack of standardized protocols for miRNA extraction and for quality and quantity assessment. Furthermore, because of high levels of endogenous ribonuclease activity and low RNA content, the quantities of circulating miRNAs recovered seem to vary widely between commercially available kits. Because of the poor RNA yield, many profiling methods use total RNA. In addition, nanospectrophotometry is unreliable at low RNA concentrations, resulting in poor quality metrics. It also needs to be considered that many miRNAs reported as circulating cancer biomarkers reflect a secondary effect on blood cells rather than a tumor cell-specific origin. Given that circulating miRNAs are influenced by blood cell counts and hemolysis, establishing a correct and optimal miRNA extraction protocol is crucial for biomarker studies. While of major interest for future biomarker development, this study presents a retrospective evaluation of our patient cohort, and no prospective validation of the identified miRNAs in independent cohorts has been performed. Therefore, in future studies of circulating miRNA biomarkers that are expressed in blood cells, miRNA expression levels should also be interpreted in light of blood cell counts.
The most recent advances in the diagnosis of invasive fungal diseases point toward miRNAs. However, the number of patients at risk of IA is increasing globally, and data on disease-specific circulating miRNAs are scant. Microbiological laboratories still struggle to achieve a timely and adequate diagnosis, and numerous groups are seeking biomarkers that could help in the early diagnosis of IA. The discovery of specific predisposing factors is therefore essential for accurate diagnosis and a better understanding of disease pathophysiology. As circulating miRNAs are promising biomarkers for various diseases, in this study, we analyzed the small RNA transcriptomes of HO patients and healthy controls through next-generation sequencing to reveal IA-specific miRNA expression patterns. The identification of IA-specific miRNA signatures might also be essential for elucidating disease pathophysiology.
Patient population
This retrospective case–control study was performed from May 2017 to November 2020 and involved two hematology centers in Hungary: the University of Debrecen, Faculty of Medicine, Institute of Internal Medicine, Debrecen, Hungary; and the Institute of András Jósa County and Teaching Hospital, Division of Hematology, Nyíregyháza, Hungary. The patient population comprised 26 adults (16 males with a median age of 63 years, range 33–71; and 10 females with a median age of 40 years, range 25–52) with different hematological malignancies (mainly acute leukemia: 73.08%) receiving stem cell transplantation and intensive chemotherapy (neutrophil count < 0.5 × 10^9 cells/L) (Table ). Patients who developed neutropenic fever (NF; temperature > 38 °C recorded twice or > 38.5 °C recorded once) were recruited. Children aged < 17 years were excluded from the study. Twenty-four healthy controls with no previous history of hematological or oncological disease were also included [median age: 36 years (range 25–52)].

Stratification of episodes
Patients were retrospectively stratified according to the standard criteria of the revised European Organization for Research and Treatment of Cancer/Mycoses Study Group (EORTC/MSG): proven IA—4 patients (15.38%), probable IA—3 patients (11.54%), and possible IA—19 patients (73.08%).

RNA extraction, quantification and quality control
Whole blood was drawn from patients and collected into EDTA-coated tubes for microRNA analyses. Analyses were carried out in a class II laminar-flow cabinet to avoid environmental contamination. Total RNA was extracted from 250 μl of whole blood using a miRNeasy Serum/Plasma Kit (Qiagen, Hilden, Germany) according to the manufacturer's instructions. A no-template control (NTC) of nuclease-free water was purified alongside the samples. RNA quantity was measured in each sample by fluorometric quantification (Qubit™ 4 Fluorometer, Thermo Fisher Scientific, USA) with a Qubit miRNA Assay Kit (Q32881, Invitrogen by Thermo Fisher Scientific, USA). The RNA integrity number (RIN) and RNA quality were measured using two different methods: spectrophotometry (NanoDrop™ 2000 Spectrophotometer, Thermo Scientific) and automated electrophoresis on an Agilent 4200 TapeStation System (G2991A, Agilent Technologies, USA) using RNA ScreenTape (5067–5576, Agilent Technologies, USA) and RNA ScreenTape Buffer (5067–5577, Agilent Technologies, USA). For all samples, the RIN value was above 5. After RNA quality control, the purified RNA samples were stored at −80 °C.

Library preparation and sequencing
Libraries for small RNA sequencing were prepared using a NEBNext® Small RNA Library Prep Set for Illumina® (New England Biolabs Inc., United Kingdom) following the manufacturer's instructions. Two sequencing runs were performed; samples were divided into batches in a random manner, and both runs contained samples from all study groups in order to address batch effects (Supplementary Fig. ). Six microliters containing 500 ng of total RNA was used as the starting material to prepare the libraries. Multiplex adapter ligations (using 3′ and 5′ SR adaptors), reverse transcription primer hybridization, reverse transcription reactions and PCR amplifications were performed as described in the protocol.
After PCR amplification, the cDNA constructs were purified with a QIAQuick PCR Purification Kit (28104, Qiagen, Hilden, Germany) and MagSI-NGS PREP Plus beads (MDKT00010075, magtivio BV, The Netherlands) following the modifications suggested in the NEBNext Multiplex Small RNA Library Prep protocol. Size selection of the amplified cDNA constructs was performed using E-Gel® EX 2% Agarose (G401002, Invitrogen by Thermo Fisher Scientific, Israel) with an E-Gel™ Power Snap Electrophoresis Device (G8100, Invitrogen by Thermo Fisher Scientific, Singapore) following the manufacturer's protocol. The 150 nt bands correspond to adapter-ligated constructs derived from RNA fragments of 21 to 30 nt in length. An agarose slice was excised from the gel, melted, and purified using a QIAQuick Gel Extraction Kit (28704, Qiagen, Hilden, Germany) following the manufacturer's recommended protocol. The purified cDNA libraries were checked on an Agilent 4200 TapeStation System using D1000 ScreenTape (5067–5582, Agilent Technologies, USA) and D1000 Sample Buffer (5067–5602, Agilent Technologies, USA). All libraries were adjusted to a concentration of 4 nM using 10 mM Tris (pH 8.5) as the diluent and pooled in equal proportions. Thereafter, libraries were denatured with 0.2 N NaOH. A standard 1% PhiX Control Library (Illumina, USA) was also denatured and used as an internal control. Finally, the libraries and PhiX control were sequenced on an Illumina NextSeq 550 Sequencing System (Illumina, USA) with read lengths of 75 base pairs and 3.5 million single-end reads per sample, on average.

Bioinformatic and statistical analyses

Sample preprocessing and determining DEMs
The demultiplexed library was checked for residual adapter sequences with Cutadapt software, and AGATCGGAAGAGCACACGTCTGAACTCCAGTCAC query sequences were filtered out. Read qualities were assessed using the FastQC program, and sequencing quality was summarized across samples grouped by batch in order to detect outliers with poor quality (Supplementary Fig. ). Additional trimming was performed with Trimmomatic (4:20 sliding-window parameter). miRNA annotation was performed with miRge 2.0 software. Sequencing reads were divided into two partitions with a read length threshold of 28 bases; for the lower partition (< 28 bases), annotation reports showed that circa 95% of the reads were assigned to miRNAs, whereas for the upper partition (> 28 bases), no miRNAs were detected. Differential expression analysis was performed with the edgeR R package, with libraries normalized by the trimmed mean of M values (TMM). Volcano plots of the edgeR results were generated using the EnhancedVolcano R package. Statistical comparisons among groups were also checked with the nonparametric Kruskal–Wallis test, with sequencing read numbers converted to RPM (reads per million) in order to normalize the libraries. P values were adjusted with the Benjamini–Hochberg method, and P < 0.05 was considered a significant difference. Additionally, a clustermap was generated in Python (ver. 3.6.14) with the seaborn package (0.11.1), in which dendrograms were created with hierarchical agglomerative clustering.

Diagnostic performances of the DEMs
The diagnostic values of the preselected miRNA biomarkers were measured by easyROC, a web-based tool for ROC curve analysis. The ROC curve was constructed by plotting the true positive rates (sensitivity values on the y-axis) versus the false positive rates (1 − specificity values on the x-axis). The area under the ROC curve (AUC) was also calculated and used as an accuracy index to evaluate the diagnostic performance of the selected miRNAs.
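Returning to the clustering step mentioned above, a minimal seaborn sketch along the described lines (hierarchical agglomerative clustering with dendrograms on a log2-fold-change matrix) could look as follows; the input matrix here is randomly generated purely for illustration, and the linkage and distance settings are assumptions rather than the study's documented parameters.

```python
import numpy as np
import pandas as pd
import seaborn as sns

rng = np.random.default_rng(0)

# Hypothetical log2-fold-change matrix: rows = 36 DEMs, columns = 26 patients.
data = pd.DataFrame(
    rng.normal(size=(36, 26)),
    index=[f"miR_{i}" for i in range(36)],
    columns=[f"sample_{j}" for j in range(26)],
)

# seaborn's clustermap performs hierarchical agglomerative clustering on both
# axes and draws the corresponding dendrograms alongside the heatmap.
g = sns.clustermap(data, method="average", metric="euclidean",
                   cmap="vlag", center=0)
g.savefig("dem_clustermap.png", dpi=300)
```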
Target and pathway prediction
miRabel, a miRNA target prediction tool, was used to determine the gene targets of the 7 selected miRNAs. For every miRNA, the top 100 hits were chosen according to the generated miRabel scores. Pathway analysis was carried out with the Kyoto Encyclopedia of Genes and Genomes (KEGG) database.

Validation of miR-seq data by qRT-PCR
Total RNA (1.5 ng) was used for miRNA-specific reverse transcription using a TaqMan™ Advanced miRNA cDNA Synthesis Kit (Thermo Fisher Scientific, USA). Quantitative real-time PCR with 62 TaqMan™ Gene Expression Assays (Thermo Fisher Scientific, USA) was performed to detect miRNA expression profiles in 3 independent technical repeats, including negative controls (no template from RNA isolation and reverse transcription), using a LightCycler® 480 Real-Time PCR System (Roche Diagnostics, Risch-Rotkreuz, Switzerland). PCR conditions were as follows: 20 s at 95 °C; 50 cycles of 3 s at 95 °C and 30 s at 60 °C; followed by 1 cycle of 3 min at 37 °C. To identify a stable endogenous miRNA control in whole blood samples from healthy controls and study participants, twenty candidate reference miRNAs were evaluated with RefFinder. Among the 20 reference miRNAs, hsa-miR-181a-5p was the most stable and was used for normalization.

Postmortem histology
Aspergillus infection morphology was assessed by PAS staining of open lung biopsy specimens obtained via postmortem thoracotomy. Histological samples were taken from the major organs according to a standard protocol. Lung sampling was performed from three independent parts of the potentially infiltrated lung parenchyma.

Ethical statement
The study protocol was approved by the Ethics Committee of the University Hospitals of Debrecen, Hungary (MK-JA/50/0096-01/2017) and carried out in accordance with the approved guidelines. Informed consent was obtained from all participants in the study.
Benchmarking pharmacogenomics genotyping tools: Performance analysis on short-read sequencing samples and depth-dependent evaluation

Pharmacogenomics (PGx) studies how genetic variations in genes (pharmacogenes) influence drug metabolism, with the aim of tailoring treatments to an individual's germline DNA. Due to significant inter-individual variability, a dose that is effective for one person may be sub-therapeutic for another. A key factor in medication metabolism is the cytochrome P450 (CYP) enzyme family, whose genes are subject to genetic polymorphisms. These polymorphisms can significantly alter enzyme functionality, either reducing or increasing metabolic activity. Microarrays have been widely used to identify variants in pharmacogenes. However, despite next-generation sequencing becoming the standard in clinical diagnostics, it is not routinely used for pharmacogenomics in clinical practice. Sequencing approaches such as whole genome sequencing (WGS) allow the detection of single-nucleotide polymorphisms (SNPs) with high accuracy, enabling not only the interrogation of known SNPs but also the identification of novel variants, in contrast to microarrays, which can only detect predetermined SNPs. Several publicly available PGx software tools have been developed for genotyping pharmacogenes from short-read WGS data, including Aldy, Astrolabe, Cyrius, PharmCAT, Stargazer and StellarPGx. The performance of these tools has mostly been evaluated by their own authors, who compared the newly developed software with the other available tools; in one comparison, the impact of higher sequencing depths (60× and 100×) was also investigated. Interestingly, despite many similarities in the outcomes of the various studies, there were some notable discrepancies. For instance, while Stargazer achieved 100% concordance in genotyping CYP3A5 in one comparison, another reported only 65.7%. In this independent study, we aim to evaluate the latest versions of the main PGx computational tools (Aldy, Stargazer, StellarPGx, and Cyrius) using a publicly available reference WGS dataset consisting of samples from four superpopulations (38.6% Europeans, 30.0% Africans, 27.1% East Asians, and 2.9% Admixed Americans; unknown for 1.4%). We assess the call rate of the tools and their concordance with the ground truth for six genes that have multilaboratory consensus results available and are supported by the tools in our study, specifically CYP2D6, CYP2C9, CYP2C19, CYP3A5, CYP2B6, and TPMT. While Aldy, Stargazer, and StellarPGx have multigene support covering all these genes, Cyrius is specifically designed for the complex CYP2D6 and does not assess other genes. Given that all tools now support the GRCh38 assembly, which has become a standard in clinical research, we mainly use this reference genome to assess their performance. Additionally, we align samples to the older assembly (GRCh37) and use different aligners (BWA and Bowtie2) to determine any effect on the downstream analysis. Finally, although these tools have been primarily assessed at their original coverage depth of around 30–40×, and in one study also at higher depths of 60× and 100×, we evaluate their performance at lower depths, including 30×, 20×, 10×, and 5×.
This will offer valuable insights regarding the use of any of the benchmarked PGx tools on datasets with lower coverage (<30×) or for studies planning to use approaches such as low‐coverage WGS in PGx research.

Seventy PCR‐free Illumina WGS FASTQ files (150 bp paired‐end Illumina HiSeq X) from the Genetic Testing Reference Material Program (GeT‐RM) were downloaded from the European Nucleotide Archive (project ID: PRJEB19931). The integrity of the compressed files was confirmed by calculating the md5 hash of the downloaded files and comparing it with the one stated in the project's database. FASTQ files were aligned to the GRCh38 and, separately, to the GRCh37 reference genome using BWA‐MEM, followed by sorting and indexing with Samtools. Similarly, FASTQ files were separately aligned to GRCh38 with Bowtie2. The average depth of each BAM file was determined using Samtools depth. Subsequently, samples aligned to GRCh38 with BWA‐MEM were downsampled using GATK DownsampleSam, applying a ratio to achieve target depths of 30×, 20×, 10×, or 5×, based on the calculated average depth. Diplotypes were called using Aldy v4.5, Cyrius v1.1.1, Stargazer v2.0.2, and StellarPGx v1.2.7. The frequently used tool PharmCAT was not included in the evaluation because it depends on external callers, such as StellarPGx or Stargazer, for genotyping CYP2D6, which was a focus of this study. All tools were executed using their default settings. Given that Stargazer requires a variant call format (VCF) file as input, this was generated using GATK HaplotypeCaller, which was run on each sample to call variants within a predefined list of pharmacogene regions (based on the regions defined in Stargazer's program, merged with the Cyrius regions). The VDR gene was used as the control gene for Stargazer. Commands used to prepare the reference genome, call variants, and run the tools are provided in Text . Ground truth was acquired from datasets published for CYP2D6, CYP2C9, CYP2C19, CYP3A5, CYP2B6, and TPMT. An adjustment to the truth dataset was made based on recent literature, where the CYP2D6 truth diplotype for NA18519 was updated from *1/*29 to *106/*29. Additionally, for NA18540, where the CYP2D6 truth is defined as (*36+)*10/*41, we also considered results correct if there was more than one copy of *36, thereby defining the ground truth as (*36(xN)+)*10/*41, for which evidence has previously been shown. All calls were compared with the ground truth (major alleles), and in instances where the truth dataset presented multiple haplotype possibilities due to variations in laboratory results, all options were considered correct if identified by the tool. When tools reported two possible diplotypes, the first reported solution was chosen for comparison. This was not applied to Stargazer, which, unlike the other tools, did not report a second diplotype but provided a list of candidate haplotypes. For calculating consensus results, in the rare instances where Cyrius, StellarPGx, or Aldy reported two possible diplotypes, both were included in the pool of potential diplotypes used to reach a consensus. Individual calls and results are provided in Tables .
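To make the preprocessing concrete, the sketch below outlines the alignment, depth estimation, and depth-based downsampling steps in Python. It is a minimal illustration, not the exact commands used in this study (those are provided in the supplementary text); the file paths, thread counts, and region file name are assumptions, and the GATK argument names follow standard GATK4/Picard usage.

```python
import subprocess

REF = "GRCh38.fa"      # hypothetical path to an indexed reference genome
SAMPLE = "NA18519"     # one GeT-RM sample, for illustration

def run(cmd):
    """Run a shell pipeline and fail loudly on error."""
    subprocess.run(cmd, shell=True, check=True)

# 1) Align with BWA-MEM, then coordinate-sort and index with Samtools.
run(f"bwa mem -t 8 {REF} {SAMPLE}_R1.fastq.gz {SAMPLE}_R2.fastq.gz "
    f"| samtools sort -@ 8 -o {SAMPLE}.bam -")
run(f"samtools index {SAMPLE}.bam")

# 2) Estimate mean genome-wide depth with `samtools depth`.
out = subprocess.run(
    f"samtools depth -a {SAMPLE}.bam | awk '{{s+=$3; n++}} END {{print s/n}}'",
    shell=True, check=True, capture_output=True, text=True)
mean_depth = float(out.stdout.strip())   # roughly 40x in this dataset

# 3) Downsample to each target depth using a keep-probability ratio.
for target in (30, 20, 10, 5):
    p = min(1.0, target / mean_depth)    # fraction of read pairs to keep
    run(f"gatk DownsampleSam --INPUT {SAMPLE}.bam "
        f"--OUTPUT {SAMPLE}.{target}x.bam --PROBABILITY {p:.4f}")

# 4) Call variants over pharmacogene regions for Stargazer's VCF input.
run(f"gatk HaplotypeCaller -R {REF} -I {SAMPLE}.bam "
    f"-L pgx_regions.bed -O {SAMPLE}.vcf.gz")  # pgx_regions.bed is assumed
```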
Performance of tools on GeT‐RM samples

Alignment on GRCh38 with BWA‐MEM

First, all WGS samples were aligned to the GRCh38 reference assembly using the BWA‐MEM algorithm. The mean depth across the genome was determined to be 39.7, with a standard deviation of 2.73 (median: 40×). The ground truth diplotypes were compared with the calls from individual tools, as well as with consensus results obtained from combinations of two and three tools. Rarely, two possible solutions were provided: once by Cyrius for CYP2D6 and five times by StellarPGx for CYP2B6. For the latter, the first solution matched the ground truth in three instances, while in two instances neither solution matched. Stargazer often provided a list of other possible haplotypes, sometimes a lengthy list of up to 10 items. As presented in Table , Aldy, StellarPGx, and Stargazer demonstrated strong performance in genotyping CYP2C19, CYP2C9, CYP3A5, and TPMT, incorrectly identifying at most one sample each. For CYP2B6, concordance rates were lower and similar across the tools, varying between 85.7% and 87.1%. Focusing on the CYP2D6 gene, greater variability was observed between the tools. Specifically, Cyrius incorrectly genotyped 2 samples and failed to provide results for 3 others. StellarPGx, Aldy, and Stargazer made incorrect calls on 4, 6, and 11 samples, respectively. All tools called CYP2D6 incorrectly in NA18565; only Cyrius was able to determine all haplotypes correctly, albeit with incorrect phasing. Additionally, samples NA21781 and NA18540 were incorrectly genotyped by three tools but were correctly identified by Stargazer and Cyrius, respectively. Meanwhile, the remaining 10 samples with wrong calls were identified incorrectly by either one or two tools. Notably, Stargazer exhibited more genotyping inaccuracies relative to the ground truth across various samples, including reporting a rare *122 haplotype for four samples instead of the actual *1. One of these samples was also misidentified by Aldy as *122. The alignment process was repeated for all 15 samples with incorrect CYP2D6 diplotypes (or no call) from any tool to rule out incomplete alignments. We also experimented with aligning those samples using BWA‐MEM with and without the "‐M" parameter (a split read is flagged as a duplicate or as a supplementary alignment, respectively). Separately, we applied post‐processing by marking and removing duplicate reads as well as performing base recalibration. In summary, there were no differences between samples aligned with BWA regardless of whether the "‐M" parameter was used. Removing duplicates resolved one no‐call by Cyrius and yielded the correct diplotype. However, compared with merely removing duplicates, the additional step of base recalibration provided no benefit; instead, it led to an additional incorrect call by StellarPGx and a different (still incorrect) diplotype for one sample already miscalled by Aldy. Since Stargazer relies on the provided VCF file, we also filtered the VCFs based on allelic balance and, separately, on quality scores. While the former resolved some of the rare haplotype calls (*122), it produced additional incorrect calls in other samples, and the overall concordance decreased to 81.8% (with a 94.3% call rate). For the latter, no improvement was observed across the whole dataset, resulting in a slightly lower 81.4% concordance for CYP2D6.
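Because Stargazer's output depends on the input VCF, simple site-level filters can change downstream calls. Below is a minimal pysam sketch of the kind of allelic-balance filter described above; the 0.2 threshold and file names are assumptions for illustration, not the exact filter used in this study, and the sketch relies on the per-allele depths (FORMAT/AD) that GATK HaplotypeCaller emits.

```python
import pysam

# Hypothetical paths; the input is a single-sample HaplotypeCaller VCF.
vcf_in = pysam.VariantFile("NA18519.vcf.gz")
vcf_out = pysam.VariantFile("NA18519.ab_filtered.vcf", "w", header=vcf_in.header)

MIN_AB = 0.2  # assumed minimum fraction of reads supporting the ALT allele

for rec in vcf_in:
    sample = rec.samples[0]
    ad = sample["AD"]  # per-allele read depths emitted by HaplotypeCaller
    if ad is None or any(d is None for d in ad) or sum(ad) == 0:
        continue        # drop sites without usable depth information
    alt_fraction = sum(ad[1:]) / sum(ad)
    # Keep homozygous-reference-like sites and sites with balanced ALT
    # support; discard sites where the ALT allele is backed by very few
    # reads, one signature of reads misaligned from a homologous gene.
    if alt_fraction == 0 or alt_fraction >= MIN_AB:
        vcf_out.write(rec)

vcf_in.close()
vcf_out.close()
```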
Alignment on GRCh37 with BWA‐MEM

Since post‐processing had a negligible effect on the CYP2D6 results, and considering that some results that were incorrect in our study were genotyped correctly in another study using the GRCh37 reference genome, we investigated whether the tools perform differently on samples aligned to the older assembly. For this, all samples were aligned to GRCh37 and the tools were run using the same methodology, with parameters adjusted for the different reference. Nearly identical results were seen for all genes except CYP2D6 (Table ). Notably, for CYP2D6, using the GRCh37 reference genome corrected one result for Cyrius and also provided accurate results for two samples for which it made no calls on GRCh38. For StellarPGx, all four incorrect calls on GRCh38 were correct on GRCh37. For Aldy, one incorrect call (NA07055; *17/*122) was corrected to *1/*17, and for Stargazer, a total of four calls were corrected (involving three cases where *122 was called erroneously instead of *1 on GRCh38). However, while resolving those issues, the tools made incorrect calls on GRCh37‐aligned samples that were correct on the newer reference. Compared with the GRCh38 results, Aldy maintained an identical concordance rate of 88.6%, whereas Stargazer and StellarPGx showed lower performance on the GRCh37 dataset, reaching 70.0% and 90.0% concordance, respectively. Only Cyrius made no additional incorrect calls, thereby achieving a higher concordance of 98.6%. Several incorrect calls on GRCh37 involved reporting rare alleles such as *131 or *139 instead of *1, as observed for Aldy, Stargazer, and StellarPGx; *139 was especially frequent in *1/*4 diplotypes (6 out of 7 cases).

Alignment on GRCh38 with Bowtie2

Due to several incorrect results in the GRCh38 and GRCh37 datasets, and in light of other studies that successfully identified correct diplotypes on the same samples but used pre‐aligned sequencing files, we aimed to determine the effect of the aligner on the downstream process. Specifically, we determined the performance of the tools on a dataset aligned to the GRCh38 assembly with Bowtie2 (Table ) and compared the results with samples aligned using BWA (Figure ). Interestingly, Bowtie2 alignments resolved all incorrect *122 haplotype assignments, provided two calls for Cyrius that were not made with the BWA‐aligned GRCh38 dataset, and corrected one StellarPGx call. On the other hand, Bowtie2 alignments also resulted in some incorrect calls in other samples. Compared with BWA alignments, a more noticeable drop in concordance was observed for Stargazer and StellarPGx, while it remained nearly unchanged for Aldy and Cyrius.

Performance differences in CYP2D6 based on variant types

The samples were categorized according to whether they contained structural variations (SVs) in the CYP2D6 gene, and the performance of the tools was assessed separately on each subset. In the dataset, 46 samples did not contain SVs, while 23 did (the NA18540 sample was omitted due to uncertainty about the presence of structural variations). Of the samples with SVs, 10 had at least one haplotype with a duplication, eight had a deletion, and seven had a fusion. All tools performed best on samples without SVs (Table ). Cyrius achieved 100% concordance in all datasets, followed by Aldy with ~95.7% and StellarPGx with 97.8% on the BWA‐aligned GRCh38 dataset and below 90% on the others.
Similarly, Stargazer had its highest concordance (87%) on the BWA‐aligned GRCh38 dataset and lower concordance (below 80%) on the others (Figure ). On samples with structural variants (Figure ), Cyrius and StellarPGx performed similarly well, although with lower concordance than for samples without SVs (90.5%–95.5% for the former and 87%–91.3% for the latter). Stargazer performed better than Aldy on GRCh38 samples, with concordance around 82.6% for the former and 78.3% for the latter.

Impact of sequencing depth on results

No studies have compared PGx tools at lower depths; therefore, we assessed their performance on the GRCh38 BWA‐aligned dataset again, downsampling the aligned sequencing files to mean coverage depths of 30×, 20×, 10×, and 5×. We also downsampled to 1× but did not include the results, as the tools failed to make calls most of the time. Figure illustrates the results obtained by all tools and the consensus approaches. In assessing tool performance across depths, some trends were noted. For CYP2C9, CYP2C19, CYP3A5, and TPMT, tools showed high concordance even at low (10×) coverage, with a slight decline at 5×. CYP2B6's concordance decreased more steadily with reduced depth, maintaining over 80% concordance at higher depths but falling to around 60% at 5×. CYP2D6 analysis showed a marked decrease in concordance across all tools at lower depths. Notably, Cyrius maintained very high accuracy even at 10× and 5×, but with a low call rate: 10% (seven samples) at 10× and 2.9% (two samples) at 5×. In most cases, the two‐tool consensus resulted in the same or higher concordance than the best‐performing tool (except for Cyrius at lower depths), and even better results were seen for the three‐tool consensus, albeit at the cost of a lower call rate. Consensus benefits were more pronounced below 20× depth. Since for some genes haplotypes other than the reference (*1) are infrequent in the population (for example, TPMT), the data were analyzed again after removing all samples with wild‐type diplotypes (*1/*1) to determine the extent to which tools may provide the correct result simply through an inability to detect variants. When comparing the dataset containing all samples with the dataset excluding wild‐types, minimal differences in concordance were observed down to the 20× depth. However, at 5×, disparities became more pronounced, particularly for CYP2B6 and CYP2C19, and especially for CYP2C9 and TPMT (Figure – semi‐transparent dotted lines; separately Figure ). The differences in concordance between the original dataset and the subset composed solely of non‐wild‐type samples were computed for each depth, followed by calculation of Pearson's correlation between the number of wild‐type samples and the difference in concordance (across all tools). Only at 5× depth was a significant, moderate negative correlation (r = −0.571, p = 0.01) observed, suggesting that an increased number of wild‐type diplotypes is associated with decreased concordance at 5× sequencing depth. In other words, the high concordance for TPMT and CYP2C9 at 5× in this dataset (~95% and ~88%, respectively) may have been inflated by the high proportion of samples with a wild‐type diplotype, while the concordance for non‐wild‐types is around 40%–60% instead.
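The correlation analysis above maps onto a single SciPy call. The sketch below is only an illustrative equivalent (the study's calculations were performed in STATA 14.2), and the arrays are dummy placeholders standing in for the per-gene values computed from the real data.

```python
from scipy.stats import pearsonr

# Placeholder inputs, one entry per gene/tool combination at a given depth:
# the number of wild-type (*1/*1) samples for that gene, and the computed
# difference in concordance between the full dataset and the non-wild-type
# subset. Values below are dummies for illustration only.
n_wildtype = [48, 40, 12, 9, 30, 5]
concordance_diff = [-0.45, -0.30, -0.05, -0.02, -0.20, -0.01]

r, p = pearsonr(n_wildtype, concordance_diff)
print(f"Pearson r = {r:.3f}, p = {p:.3f}")
# The sign of r depends on the direction of the subtraction; the point of
# the test is whether the wild-type count tracks the concordance shift.
```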
Results from the consensus approach

Given that a consensus approach could improve accuracy and reduce the false‐positive rate, we separately examined two‐tool and three‐tool consensus models. In general, consensus results were nearly identical to those of the individual tools for genes with high concordance across datasets (CYP2C9, CYP2C19, CYP3A5, and TPMT). For BWA‐aligned GRCh38 samples, the two‐tool consensus achieved slightly higher concordance on CYP2B6 (88.4%) than the best‐performing tool (87.1%), and the three‐tool consensus increased this further to 91.8%. However, as a tradeoff, call rates dropped to 98.6% and 87.1% for the two‐tool and three‐tool consensus, respectively. Requiring at least a two‐tool or three‐tool consensus for CYP2D6 increased concordance to over 98%, surpassing the individual tools, albeit with a reduced call rate. Additionally, a four‐tool consensus was tested for CYP2D6, achieving 100% concordance but reducing the call rate to 75.7%. Results on the BWA‐aligned GRCh37 samples were similar, with Cyrius slightly outperforming the two‐tool consensus and achieving nearly identical concordance with the three‐tool model. Finally, the results for the Bowtie2‐aligned samples on the GRCh38 assembly showed only minor differences from Cyrius, yet markedly better results than any other tool. While the consensus approach did not surpass Cyrius on CYP2D6 samples without structural variants, on samples with SVs the consensus approaches outperformed Cyrius in all datasets by 4%–5% (except for the GRCh37 dataset, where the two‐tool consensus was nearly identical).
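A minimal sketch of the consensus rule described in the Methods is shown below: each tool contributes a pool of candidate diplotypes (both solutions when a tool reports two), alleles are order-normalized, and a diplotype is accepted when at least k tools agree. This is our reading of the described procedure rather than the authors' code, and the helper names and example calls are hypothetical.

```python
from collections import Counter

def normalize(diplotype: str) -> str:
    """Order-normalize a diplotype so '*1/*4' and '*4/*1' compare equal."""
    return "/".join(sorted(diplotype.split("/")))

def consensus_call(tool_calls: dict, k: int = 2):
    """Return a diplotype reported by at least k tools, else None (no call).

    tool_calls maps tool name -> list of candidate diplotypes; a tool that
    reported two possible solutions contributes both to the pool.
    """
    votes = Counter()
    for candidates in tool_calls.values():
        # Each tool votes at most once per distinct diplotype.
        for d in {normalize(c) for c in candidates}:
            votes[d] += 1
    if not votes:
        return None
    best, n = votes.most_common(1)[0]
    return best if n >= k else None

# Illustrative example with made-up calls:
calls = {
    "Aldy": ["*1/*4"],
    "Stargazer": ["*4/*1"],
    "StellarPGx": ["*1/*4", "*4/*10"],  # two possible solutions, both pooled
    "Cyrius": ["*1/*4"],
}
print(consensus_call(calls, k=3))  # -> '*1/*4'
```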
This independent PGx tool benchmarking study mostly showed small differences among tools for the genes analyzed, except for CYP2D6, where the differences between tools, reference genomes, and aligners were more notable. Comparing our findings with other benchmarks, which mostly used earlier versions of the tools (except for Cyrius, which has not seen a public update since 2021), we saw similarities but also some differences.
For instance, the study by Chen and colleagues, which assessed Cyrius's performance across a larger dataset of 144 samples, reported 99.3% concordance, while our findings were very close, only a percentage point or two lower (depending on the dataset). Our results diverge more from those reported by Aldy's developers, mostly for CYP2D6, where a 98.6% concordance for Aldy was reported on the same Illumina WGS dataset, whereas we found 88.6% (BWA‐aligned). Furthermore, the StellarPGx authors reported 99% concordance for CYP2D6 diplotypes in 109 GeT‐RM WGS samples, which we also found to be lower: 94.3% on the GRCh38 and 90% on the GRCh37 BWA‐aligned datasets. Differences in concordance may arise from the use of different datasets or, where the same dataset is used, from the criteria for counting a call as concordant with the truth, as well as from variations in the alignment method or any post‐alignment processing steps. It is also possible that the differences arise from using older ground truth data containing more incorrect truth diplotypes. In this study, we used the most up‐to‐date ground truth data, and we therefore explored the other sources of potential variation. First, we investigated the effect of common post‐processing steps on the 15 samples with incorrect calls. Removing duplicates helped to resolve one no‐call made by Cyrius, while base recalibration had a minor negative effect and resulted in an additional incorrect call by StellarPGx. Results across studies have varied more for Stargazer, a tool that requires a VCF file as input, which can be created and processed using various methods; as a result, outcomes may differ even on the same samples. For example, we filtered VCFs based on quality scores and allelic balance, and while this approach resolved incorrect calls for some samples, it introduced erroneous calls in others. This indicates that Stargazer is sensitive to the input VCF file and suggests that preprocessing of VCFs may require further fine‐tuning to achieve optimal results for CYP2D6 when using Stargazer. Another factor in PGx analysis, as demonstrated in our experiments, is the reference genome. This was illustrated by several corrected diplotypes when calling star alleles on GRCh37‐aligned samples instead of GRCh38. However, we also observed incorrect calls on GRCh37‐aligned samples, indicating that the choice of reference genome can affect the results in both directions. Based on our examination of some sample alignments, we believe this may be due to certain regions being more susceptible to misalignment of reads from the region of the homologous CYP2D7 gene. For example, in several instances where samples were aligned to GRCh38 with BWA, a *1 was mistakenly called *122, suggesting the presence of the corresponding variant rs61745683. However, the alignments indicate that other reads, likely from the CYP2D7 region, misaligned to this region, misrepresenting the sequence and resulting in the incorrect call (see example in Figure ). When comparing the read alignments of samples aligned to the two reference genomes, misaligned reads in this region are more prevalent with the GRCh38 reference, affecting the calls on these samples. In contrast, GRCh37 appears less prone to such misalignments in this region, thereby yielding the correct *1 haplotype instead of *122.
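Such misalignment can be examined directly from the BAM files. The pysam sketch below tallies reads and their mapping qualities over the CYP2D6 locus; the GRCh38 coordinates are approximate, the file path and MAPQ cutoff are hypothetical, and the sketch illustrates the kind of inspection described here rather than the exact procedure used.

```python
import pysam
from collections import Counter

# Approximate GRCh38 coordinates of CYP2D6 (for illustration only).
REGION = ("chr22", 42_126_000, 42_131_000)

bam = pysam.AlignmentFile("NA07055.bam", "rb")  # hypothetical path

mapq_hist = Counter()
for read in bam.fetch(*REGION):
    if read.is_unmapped or read.is_duplicate:
        continue
    mapq_hist[read.mapping_quality] += 1

total = sum(mapq_hist.values())
low = sum(n for q, n in mapq_hist.items() if q < 20)  # assumed cutoff
if total:
    print(f"{total} reads over CYP2D6; {low} ({low / total:.1%}) "
          f"with MAPQ < 20")
# Reads misaligned from the homologous CYP2D7 region often appear here
# with low mapping quality, which some genotyping models can down-weight.
```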
However, while samples aligned to GRCh38 may be more susceptible to these misalignments, we observed that Bowtie2‐aligned samples can contain a similar number of misaligned reads. Nonetheless, those reads generally have significantly lower mapping quality, which the tools can account for in their genotyping models (e.g., all *122 alleles were correctly called as *1 in the Bowtie2‐aligned dataset). In our other work, we have observed a similar issue with DRAGEN‐aligned samples as well. The noise generated by such misalignment could also underlie other incorrect calls observed across all datasets (e.g., *139 in the GRCh37 dataset). Interestingly, in our BWA‐aligned GRCh38 dataset, Cyrius initially failed to make calls for three samples that were correctly called at lower sequencing depths (correct calls were made for samples NA19147 and HG00276 at both 30× and 20× depth, and for NA07055 at 20× depth). The missing calls can be explained by an ambiguous normalized depth value for calling a deletion in one sample and by noisy alignments at key variant sites in the other two samples. This ambiguity and noise were reduced when downsampling to a lower depth (X. Chen, personal communication, April 22, 2024). The issue of ambiguous normalized depth values was also resolved after removing duplicates from the aligned file, which was the only positive effect of post‐processing we observed. In general, Cyrius seems to adopt a more cautious approach, opting not to provide a result rather than risk making an incorrect call. This is well illustrated by the data at lower sequencing depths, where 100% concordance was observed at 10× and 5×, but for only 10% and 2.9% of samples, respectively. Thus, Cyrius may be the preferred choice for genotyping CYP2D6 when prioritizing high accuracy and minimizing false positives with a single tool, which is particularly important in clinical settings. With regard to sequencing depth, we observed that the tools typically perform well at depths of 20× or higher, with small or no differences compared with higher depths, depending on the gene. Additionally, for some genes, such as TPMT and CYP2C9, performance at 5× remains around 90% or more but may be biased by the high number of wild‐type alleles: when assessing performance solely on non‐wild‐type alleles, markedly lower results (around 40%–60%) were observed for those genes. Aldy appeared to be more influenced by depth, as its concordance on CYP2D6 decreased steadily across all depths and was notably more sensitive at 10×. Consensus results outperformed Stargazer, Aldy, and StellarPGx on CYP2D6, but not always Cyrius itself. Therefore, a consensus approach can be recommended when using the first three tools, but its utility is more debatable when using Cyrius at depths of 20× and higher. In instances where Cyrius is unable to make a call, a consensus call from the other tools would be beneficial. For the other genes, the concordance of all tools was very close to that of the consensus approach, making clear recommendations difficult. However, considering that no single tool consistently performed best, using multiple tools and a consensus approach may be advisable for the most accurate results. It is important to note that at lower sequencing depths this approach can also lead to incorrect consensus calls, but using more tools can help minimize this risk. In conclusion, this study demonstrates that PGx tools perform well on the assessed pharmacogenes, even at lower sequencing depths.
Based on our analysis, we recommend using sequencing data with at least 20× depth and, at lower depths, considering a consensus approach using the best‐performing tools to lower the risk of incorrectly called haplotypes by any single tool. When analyzing CYP2D6, a consensus approach may be less important if using Cyrius, but it can still be beneficial in avoiding incorrect calls made by a single tool.

Limitations

We used 70 samples from four superpopulations, but a larger and more diverse dataset could offer a more comprehensive assessment of the tools' performance, particularly with a higher number of haplotypes, including rarer ones. For example, some population groups may have a higher frequency of SVs, which are more challenging to call accurately and, as determined in this study, may result in lower tool performance. Since there was no consensus among laboratories and studies on the ground truth for some samples (mostly involving rare variants), all possible diplotype variants were included when determining a true call, which could affect concordance results. Finally, the performance of the tools may vary for other datasets containing samples sequenced with different sequencers, prepared with other library methods (such as PCR‐based protocols), or aligned with different aligners. This study focused on six selected genes; therefore, performance on genes such as SLCO1B1, DPYD, G6PD, and others was not assessed, and the tools' performance may vary for genes not covered here.

A.H., S.L., S.S., C.M. and R.C. wrote the manuscript; A.H. designed and performed the research and analyzed the data with input from S.L., S.S., C.M. and R.C. This work is supported by a Medical Research Future Fund Genomics Health Future Mission Grant [MRF/2024900 CIA Conyers] supporting AH and CM positions. RC is a recipient of a Murdoch Children's Research Institute Clinician Scientist Fellowship and is an associate investigator with the ReNEW Novo Nordisk Stem Cell Foundation. The authors declared no competing interests for this work.
Anatomy education in US Medical Schools: before, during, and beyond COVID-19

Over the past decade, the landscape of United States (US) medical education has continuously changed following calls to adopt innovative, competency-based curricula to produce physicians better prepared to navigate our complex health care system. Notable changes have included adoption of new technologies, a greater emphasis on team-based learning, enhancement of interprofessional education, and condensation of the preclinical curriculum. In particular, numerous institutions in the US have recently compressed their basic sciences or foundational preclinical curricula from the traditional 24 months to 12 or 18 months. Furthermore, the COVID-19 pandemic has significantly impacted all facets of medical education, requiring physician educators to redesign curricula in line with social distancing mandates. For many preclinical courses, these changes may have simply entailed reduced formalized didactics, more case-based modules, and a transition to online, recorded lectures. However, such modifications are more difficult for subjects with a physical laboratory component, such as gross anatomy, which has conventionally relied on in-person cadaveric dissection as a primary educational tool since the fifteenth century. As opposed to other courses, anatomy requires an appreciation for complex three-dimensional relationships and is often one of the first preclinical courses in which correlates to clinical medicine can begin to be illustrated. As such, a direct, "hands-on" approach to anatomy education is perceived by some to be an indispensable component of subject mastery. Previous studies have reported on the steady rate of modifications to US medical school anatomy education over the past two decades. Such changes have included decreased total course time, decreased dissection time, and integration of anatomy education into other courses. However, no prior studies have included responses from more than 50% of medical schools in the US. Additionally, there have been no reports on the effects of the COVID-19 pandemic on US anatomy education or the future direction of the discipline's pedagogy in the face of prolonged social distancing mandates. Therefore, we surveyed US medical schools to assess recent trends in anatomy education, the impact of the COVID-19 pandemic on anatomy teaching, and future anticipated directions of anatomy curricula.
Survey distribution

All allopathic schools that were participating members of the Association of American Medical Colleges (AAMC) were identified. E-mail addresses for each school's anatomy course director(s) were identified by searching faculty websites, Google search, or directly contacting the school's medical education office. If multiple course directors were listed, e-mail addresses for all directors were included in the initial outreach. All collected addresses were then e-mailed a 29-item survey (Additional file ) asking questions about their school's gross anatomy curricula. Open-ended response questions also provided an opportunity to discuss the most recent changes to the school's anatomy curriculum as well as any anticipated future changes. If no response was provided within a week, anatomy professors at each institution were individually e-mailed for follow-up. This was repeated three times for a total of four follow-up attempts (Fig. ).

Survey components

The distributed survey (Additional file ) consisted of objective and subjective questions about each school's gross anatomy curriculum. The first portion of the survey asked multiple-choice questions specific to each school's gross anatomy curriculum before and during COVID-19. These included questions regarding course structure, teaching modalities, practical or "hands-on" learning (e.g., cadaver dissection/prosection, 3D/VR software, small group learning, etc.), use of supplemental material, and grading schemata. Respondents were also asked about their opinions of the effect of COVID-19 on the quality of their students' anatomy education using a 5-point Likert scale. The last portion of the survey asked open-ended questions about curricular weaknesses, recent major curricular changes, and any anticipated future changes. Subjective responses were categorized into groups for analysis, as agreed upon by 2 authors (MS, AP).

Statistical methods

Parametric and nonparametric continuous variables were summarized using mean and standard deviation or median and quartiles. Differences in parametric continuous variables between pre-COVID-19 and COVID-19 periods were assessed using Student's t-test. Non-parametric differences were assessed using the Wilcoxon rank-sum test. Chi-square analysis and Fisher's exact test were used to assess associations between categorical variables. A two-sided Type I error rate of 0.05 was used to indicate statistical significance. All calculations were performed using STATA 14.2 (STATA Corp, College Station, TX, USA).
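The statistical comparisons described above map directly onto standard SciPy routines. The study itself used STATA 14.2, so the following Python sketch is only an illustrative equivalent, and all values in it are placeholder data.

```python
from scipy import stats

# Placeholder data: weekly hands-on hours reported pre-COVID vs during COVID.
pre_covid = [6.0, 8.0, 5.5, 7.0, 6.5, 9.0]
during_covid = [3.0, 4.5, 2.0, 5.0, 3.5, 4.0]

# Parametric comparison (Student's t-test) and its nonparametric
# alternative (Wilcoxon rank-sum), as described in the methods.
t_stat, t_p = stats.ttest_ind(pre_covid, during_covid)
z_stat, z_p = stats.ranksums(pre_covid, during_covid)

# Categorical association, e.g., dissection vs. prosection use by period,
# arranged as a 2x2 contingency table (counts are dummies).
table = [[106, 11],
         [50, 67]]
chi2, chi_p, dof, _ = stats.chi2_contingency(table)
odds, fisher_p = stats.fisher_exact(table)  # exact test for small cells

print(f"t-test p={t_p:.3f}; rank-sum p={z_p:.3f}; "
      f"chi-square p={chi_p:.3f}; Fisher p={fisher_p:.3f}")
```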
Surveys were sent to one or more course directors or anatomy professors at 143 of 145 AAMC (98.6%) allopathic medical schools. Contact information was not available for the remaining two schools. A total of 117 (81.8%) responses were recorded. Among those that responded, 60 (51.3%) institutions taught gross anatomy within organ-systems blocks, while 54 (46%) taught anatomy as its own course or within a pre-organ-system block (Table ).

Changes to anatomy curricula prior to COVID-19

Prior to COVID-19, the majority (n = 94; 80.3%) of institutions delivered didactics through live and recorded lectures. Nineteen (16.2%) institutions implemented a "flipped-classroom" approach to didactic learning. Cadaveric dissection (n = 106; 90.6%) was the most popular form of "hands-on" interactive learning, with an average of 5.1 ± 1.41 students assigned to each cadaver. Thirteen schools (11.1%) reported the use of novel virtual software (e.g., HoloLens, 3D virtual reality software, etc.) as a primary means of interactive learning, although 75 (64.1%) schools provided anatomy applications to their students as a supplemental resource. Most schools (n = 65; 57%) reported a major change to their anatomy course within the five years prior to COVID-19 (Table ). Decreased total course time (19.7%), integration into other courses (18.8%), and implementation of a flipped classroom in lieu of previous didactics (14.5%) were the most frequently reported changes. Among course directors who reported a weakness of their course, answers centered on insufficient dissection time (23.1%) and total course time (15.4%) were most common.

Effect of COVID-19 on anatomy curricula

During COVID-19, online cadaveric prosection (students are provided with images of a cadaver previously dissected by an experienced anatomist) was the most common means of interactive learning (n = 50; 42.7%), and 28 (23.9%) schools reported switching from cadaver dissection to prosection (Table ). The majority of course directors (n = 78; 68.4%) indicated intentions to revert to their pre-COVID curriculum structure following easing of pandemic-related social distancing mandates. We found that COVID-19 led to a significant decrease (p < 0.01) in both the weekly hours and the fraction of the course devoted to "hands-on" interactive learning (Table A). Due to COVID-19, the majority of schools (n = 62; 53.5%) used a Pass/Fail rubric with no internal relative performance ranking. Moreover, there was a significant decrease in the teaching of clinical correlates in anatomy courses (n = 100 [86%] vs n = 116 [99%]; p = 0.02) and imaging (n = 97 [83%] vs n = 109 [93.2%]; p < 0.01). When course directors were asked to compare students' performance on assessments during COVID-19 to that of previous years (Table B), the most common response was 'The Same' (n = 63; 53.9%). However, when asked about the effect of COVID-19 on the quality of anatomy education, ninety-two respondents (78.6%) reported 'Slight' or 'Significant Negative Impacts'. Among those reporting negative effects, 'Less time devoted to interactive learning' (62.4%), 'Less time learning in-person' (62.4%), anxiety (59.0%), and 'Lack of Dissection' (56%) were the most cited justifications.
Anticipated changes to anatomy structure & curriculum

Lastly, answers pertaining to the incorporation of virtual-reality software or novel 3D learning platforms (23.1%) and reducing time spent on cadaver dissection (12.8%) were the most commonly reported anticipated future changes among institutions planning to institute a change.
To our knowledge, this is the first study to describe the current state and future of medical school gross anatomy education with over 80% course-director participation. It is also the first study to objectively and subjectively analyze the impact of COVID-19 and how this impact fits within recent trends in US medical school anatomy education. While we found a continuation of general educational trends described by previous authors, we also report on recent changes in didactic approaches and novel future directions for anatomy education, potentially catalyzed by the social-distancing mandates imposed by COVID-19.

In accordance with prior work, we found that a growing number of institutions have integrated anatomy education into organ-system blocks. Cadaveric dissection remained the most popular mode of interactive learning among course directors, and the proportion of schools using dissection prior to COVID-19 (90%; 2018–2019) was similar to that reported by a comparable study assessing the 2016–2017 academic year. We also found that a majority of medical schools provided some form of external online resource, including phone or tablet applications, as a supplement to traditional lectures and coursework.

Interestingly, our survey results indicate that some schools do not utilize practical learning as a form of formalized assessment. We found that 78.4% make use of in-person practical exams, while 13.6% use virtual practical exams. By extension, this implies that a minimum of 8% of schools do not use any form of practical evaluation of knowledge, despite previous literature assessing its efficacy as a summative assessment tool. A small proportion of institutions also incorporate standardized patients into their student performance assessments, which may be of particular use in developing students’ competencies beyond the application of anatomy knowledge.

Our study also sought to examine recent major changes to US anatomy curricula prior to COVID-19. In addition to a compression of course hours and the integration of anatomy into other courses – which have been previously reported on – we found that many institutions have recently adopted, or plan to adopt, a ‘flipped classroom’ approach to learning, wherein students independently gain an understanding of material, allowing greater class time to be devoted to application and discussion. A recent meta-analysis examining the flipped-classroom approach in healthcare professional education courses, including anatomy, concluded that flipped-classroom approaches to learning were preferred by students and resulted in increased learning performance. The authors attributed these findings to increased temporal flexibility in synthesizing material and, importantly, to an increase in the amount of active learning afforded by the lecture time saved. Flipped-classroom teaching modalities may be especially pertinent for anatomy education, given that our study’s findings indicate the most common weakness of anatomy curricula reported by course directors is insufficient dissection time, which may be considered a form of active learning. Furthermore, lack of time devoted to practical learning and to in-person learning were the most cited reasons for the pandemic’s negative impact on anatomy education.
These findings are logical, as anatomy requires an understanding of three-dimensional relationships that may be appreciated through cadaveric dissection but may be difficult to capture through two-dimensional media, such as lecture slides or textbooks. Utilization of a flipped-classroom approach may be a prudent future direction for anatomy education, as it will allow educators to maximize formalized curriculum time spent on interactive or in-person learning.

The COVID-19 pandemic has dramatically affected the landscape of medical education. While it was admittedly commonplace for students to forego in-person preclinical lectures prior to the pandemic, the loss of those aspects of medical education that require collaboration and physical presence has detracted, and will continue to detract, from the learning experience and student engagement. Furthermore, beyond being tasked with revamping an entire curriculum seemingly overnight, medical educators, often physicians themselves, have the added responsibility of remaining at the frontline of patient care during the pandemic. Thus, in assessing the effects of COVID-19 on anatomy education, we were unsurprised to find that a majority of anatomy course directors found the COVID-19 pandemic to have had a slight or significant negative impact on the quality of learning, owing to a reduction in practical and in-person learning. Specifically, social-distancing mandates tended to lead to an increase in the fraction of course time devoted to lecture, with a corresponding decrease in the amount of active learning time. Interestingly, however, most course directors indicated that student performance on assessments did not change. This can likely be explained in part by changes in how student assessments were conducted during COVID-19. Prior to COVID-19, 78% of schools reported the use of in-person practical exams as part of their assessment. In contrast, during COVID-19, 25% of course directors reported a completely virtual curriculum this year. The lack of an in-person cadaveric practical exam may in part explain these findings, as students may not have needed to demonstrate proficiency in the three-dimensional relationships of the body, but rather to memorize images that appeared on virtual assessments. These findings highlight the importance of interactive and practical, application-based education in learning complex relational subjects such as anatomy. While the majority of surveyed institutions intended to return to their pre-COVID-19 course curriculum following the pandemic, 16% indicated otherwise, potentially reflecting permanent adoption of new educational tools developed or acquired as a result of the pandemic.

Interestingly, we found a significant decrease in the number of schools that taught clinical anatomy correlates and radiology during this period, both of which have previously been linked to significant enrichment of student knowledge. These findings could arise from a few possible explanations: the sudden onset of the pandemic amidst the school year forced educators to immediately transition entire courses to an online format, which may have left holes in curricula, and physician-educators who teach clinical correlates and imaging may have found themselves burdened with new or additional responsibilities during this time. Additionally, there has been significant incorporation of ultrasound teaching in anatomy courses in previous years.
Thus, though one would expect a transition to online learning to have no effect on radiological teaching, a decrease in ultrasound pedagogy, owing to its traditional in-person setting, could explain these findings.

Looking ahead at anticipated future changes to US anatomy education, there appears to be a growing movement away from time dedicated to dissection, as well as an embrace of virtual-reality software. In this light, the COVID-19 pandemic has further highlighted the need to leverage modern technologies to improve efficiency in anatomy education. While decreases in dedicated cadaveric dissection time have been a well-recognized trend in recent years, we found that 23% of institutions planned on incorporating virtual-software/mixed-reality learning into their pedagogical armamentarium in the near future. In certain ways, this may reflect one of the few benefits to medical education spurred by the COVID-19 pandemic, as a recent article examining the use of mixed-reality technologies during the pandemic found them to be an effective method of learning anatomy, with advantages over traditional approaches. Similar findings have also been shown in a previous meta-analysis. Furthermore, obtaining, storing, and appropriately caring for cadavers can be costly, especially during the COVID-19 era, during which numerous institutions have taken the precautionary step of ceasing acceptance of cadaver donations. Virtual educational tools may help account for such shortages and decrease the costs associated with conducting anatomy education. While virtual dissection as a supplement to traditional cadaveric dissection appears to be a promising direction for anatomy education, our finding that most course directors intend to revert to their pre-COVID curriculum indicates that virtual software, in its current form, is an insufficient substitute for cadaveric dissection. Thus, an increased emphasis on virtual learning should be incorporated with caution to ensure there are no negative tradeoffs in education with this approach.

Limitations

This study had several limitations. First, we were unable to collect responses from 19% of institutions, and there are medical schools in the US beyond those that are members of the AAMC, most notably osteopathic institutions. Thus, our findings may not be fully reflective of anatomy education in the US at large. However, to our knowledge, our response rate of 80% is the highest among similar survey-based studies in anatomy education. Second, our survey asked about the weekly time commitment of didactics and interactive learning but did not ask about total course hours, which could have provided a useful metric. Previous authors have noted that calculating total course hours for an anatomy course is laborious for course directors to estimate, especially for those in integrated curricula, and a potential reason for their low response rates. Thus, we additionally asked course directors to estimate the relative split between time dedicated to lecture and interactive learning. Furthermore, our survey did not include questions about the course directors themselves, including age, experience, and educational background. Differences across these factors could lead to differences in opinion and should be considered in future studies. Lastly, the COVID-related subjective questions were answered by the course director of each institution, and responses may be biased by personal opinion and not necessarily reflective of students’ learning experiences.
While a more comprehensive survey would also consider student experiences, many students would not have a non-COVID-era anatomy course with which to compare their experience; thus, we decided that course directors, who inherently have a more longitudinal perspective, would be the most appropriate group to survey.
Our study highlights the state of US anatomy medical education at immediate pre- and mid-COVID-19 time points, characterizes the adaptations made to accommodate the pandemic, and reports on potential directions of future curricula. We found increasing adoption of new approaches to didactics and of online interactive learning modalities that may be appropriate substitutions for traditional methods in some cases. Lastly, our analysis of course directors’ experiences and opinions indicates the importance of maximizing interactive learning during a period in which anatomy course time has been decreasing.
Additional file 1. Survey Instrument.
|
Intestinal obstruction caused by disseminated mycobacterium avium complex disease following solid organ transplantation: a case report | 6bc54d44-8a19-40ce-961a-92a2298fd86e | 11762457 | Surgical Procedures, Operative[mh] |
Mycobacterium avium complex (MAC) is the most common causative pathogen of non-tuberculous mycobacterial (NTM) infection, which mainly affects the lungs. Disseminated MAC disease can occur in immunocompromised individuals, such as those with acquired immunodeficiency syndrome (AIDS) or hematological malignancies and those who are positive for anti-interferon-gamma (IFN-γ) antibodies. However, little is known about the occurrence of NTM infection in recipients of solid organ transplantation. Disseminated MAC disease often involves the duodenum as an entry point but rarely forms massive lesions. Herein, we present a case of disseminated MAC disease following liver transplantation that resulted in an obstructive mass in the intestinal tract requiring differentiation from a malignant tumor.
The patient was a 76-year-old woman who had undergone living-donor liver transplantation 15 years prior to treat primary biliary cirrhosis. Her post-transplant immunosuppressive regimen had included prednisolone, tacrolimus, and mycophenolate mofetil. Five years prior, she had been diagnosed with pulmonary MAC disease following the identification of a granular shadow on computed tomography (CT) and multiple positive sputum cultures for Mycobacterium avium. There was no worsening of the shadows or appearance of cavitary lesions in her lung fields; therefore, she was managed with a watchful-waiting approach. She had undergone permanent pacemaker implantation for sick sinus syndrome and multiple surgeries for bilateral breast cancer and uterine fibroids, and she had been diagnosed with diabetes mellitus. She had no history of smoking or of frequent exposure to soil or unsanitary water.

Three months before hospital admission, the patient experienced persistent fever and occasional vomiting. Chest CT revealed worsening granular shadows in the right middle and lower lobes and the left lingular lower lobe. Abdominal CT revealed new thickening of the small intestinal wall at the jejuno-jejunal anastomosis site that had been constructed during her liver transplantation procedure, as well as enlargement of multiple mesenteric lymph nodes. Positron emission tomography-CT indicated increased uptake (maximum standardized uptake value = 13.8) at these sites (Fig. a). Endoscopy of the small intestine revealed an elevated lesion with circumferential ulcers at the jejuno-jejunal anastomosis site, accompanied by intestinal stenosis (Fig. b). A primary malignant lymphoma of the small intestine was initially suspected, prompting a biopsy. The patient was admitted to hospital because of persistent fever, nausea, vomiting, and poor oral intake.

During her hospital stay, the patient’s body mass index was 17.5 kg/m². She exhibited a fever of 38.5 °C, heart rate of 81 beats per minute, blood pressure of 103/48 mmHg, oxygen saturation (SpO2) of 97% on room air, and a respiratory rate of 31 breaths per minute. Chest auscultation revealed no abnormalities in her heart or lung sounds, and a soft mass was palpable in the left midline of her abdomen. Her white blood cell count was within normal limits (3,800/µL) but showed an elevated neutrophil percentage of 86.0%. Her absolute CD4-positive T-cell count was 1,809/µL. Her C-reactive protein level was 6.61 mg/dL, and her serum was positive for procalcitonin. Liver function tests were normal (total bilirubin, 0.5 mg/dL; aspartate aminotransferase, 23 U/L; alanine aminotransferase, 14 U/L; lactate dehydrogenase, 165 U/L; gamma-glutamyltransferase, 28 U/L; alkaline phosphatase, 79 U/L). Her renal function was normal (blood urea nitrogen, 10 mg/dL; creatinine, 0.51 mg/dL). Her soluble interleukin-2 receptor level was elevated (4,975 U/mL). Tests for human immunodeficiency virus antibodies and anti-IFN-γ neutralizing autoantibodies were negative (Table ).

Pathological findings from a small intestine biopsy showed diffuse infiltration and clustering of CD68-positive histiocyte-like cells within the lamina propria of the mucosa, interspersed with numerous neutrophils. No caseous necrosis or granulomas were observed, and the cytoplasm contained numerous acid-fast bacilli, as confirmed by positive Ziehl-Neelsen staining (Fig. c). Blood cultures obtained on admission grew M. avium, confirming a diagnosis of disseminated MAC disease.
Furthermore, pathological examination of a liver biopsy revealed a non-caseating granuloma, and cultures yielded M. avium. Treatment with oral ethambutol (15 mg/kg), azithromycin (250 mg/day), and rifabutin (7.5 mg/kg), supplemented with intravenous amikacin (10 mg/kg), was initiated. Despite this treatment, the patient’s fever persisted. Given the possibility of poor drug absorption caused by her intestinal condition, the route of azithromycin administration was switched to intravenous on day 55, and levofloxacin (500 mg/day) was also administered. Following this treatment adjustment, her fever gradually subsided. However, her bowel lesions did not improve, and the ileus symptoms persisted. Additionally, because of the onset and progression of hepatic dysfunction, together with the cholestasis observed on liver biopsy, a percutaneous transhepatic cholangio-drainage tube was placed for bile drainage at the site of the small intestinal stenosis. Moreover, a percutaneous enteral tube was placed for stable nutritional infusion and drug administration. Surgical resection was not recommended because it would have required massive intestinal resection and because of severe adhesions resulting from a pancreatic fistula after liver transplantation. Endoscopic dilation was also considered; fortunately, however, antibiotic treatment improved the stenosis, and oral intake was subsequently resumed. The patient was discharged on day 207 of her hospitalization. Her intestinal stenosis has been gradually improving, and she continues to receive antimicrobial treatment and enteral nutrition through outpatient care.
This case highlights two key findings: first, disseminated MAC infection can manifest after liver transplantation, and second, it can present with occupying lesions large enough to cause intestinal obstruction and mimic malignancy. Disseminated MAC infections are predominantly observed in patients with AIDS and are relatively rare in solid organ transplant recipients. Among transplant recipients, those with liver transplants have a lower incidence of non-tuberculous mycobacterial infections than do those with kidney or lung transplants, likely because of the less intensive immunosuppressive therapy generally used. Conversely, it has been reported that 11 of 92 patients with pulmonary NTM prior to lung transplantation developed disseminated disease after the procedure. This suggests that NTM colonizes many patients before transplantation and that pre-existing pulmonary NTM in solid organ transplant recipients may increase the risk of progression to disseminated disease. In the present case, the patient’s pre-existing pulmonary MAC infection, post-transplant immunosuppressive therapy, and advanced age likely contributed to the development of disseminated MAC infection. Notably, disseminated MAC can occur as a serious late complication, even in patients who have been stable for extended periods following transplantation.

Regarding its intestinal manifestations, lesions in disseminated MAC typically develop in the duodenum and present as ulcers or inflammatory lesions. A previous report attributed intestinal obstruction to disseminated MAC; however, that case involved patients with prior histories of recurrent intestinal obstruction, in whom a direct causal relationship with MAC lesions could not be definitively established. By contrast, in our case, endoscopy confirmed obstruction caused by an occupying lesion. Histopathological analysis did not reveal epithelioid granulomas but did show aggregates of CD68-positive histiocyte-like cells loaded with intracellular mycobacteria, thus confirming direct MAC infiltration. Typically, macrophage activation leads to the formation of granulomas as a containment strategy that prevents the formation of large occupying lesions. In this case, however, we hypothesized that the patient’s immunosuppressed state hindered granuloma formation, leading to excessive bacterial proliferation and an overabundance of reactive histiocytes within the tissue. Initially, our clinical and radiological findings led to a tentative diagnosis of malignant intestinal lymphoma. However, the diagnostic process, which included cultures and biopsies from multiple organs, facilitated an accurate diagnosis. This case highlights the critical need for increased awareness of atypical infectious presentations in immunosuppressed patients that may closely mimic more conventional pathologies, such as malignancy.

We presented a case of disseminated MAC infection following liver transplantation that resulted in a tumorous lesion leading to intestinal obstruction. This case highlights the importance of recognizing disseminated MAC as a potential post-transplant complication. Disseminated MAC should be thoroughly considered in the differential diagnosis of such cases, particularly those with atypical presentations that mimic malignancy.
|
Medical students’ perceptions of integrating social media into a narrative medicine programme for 5th-year clerkship in Taiwan: a descriptive qualitative study | 83bb18f6-ad55-436e-b18b-0e2c5a76f783 | 10949758 | Patient-Centered Care[mh] | In the ever-evolving landscape of medical education and patient care, the intersection of social networking and medical humanities has raised complex questions and opportunities. The field of medical humanities represents a vital bridge between the clinical aspects of healthcare and the broader human experiences that define the patient-provider relationship . As medical education evolves to encompass a more holistic approach , the integration of social media platforms into medical humanities education has emerged as a promising avenue for enriching the learning experience . The global outreach and ease of information sharing on social media platforms can enrich the humanities discourse , fostering cross-cultural connections and facilitating valuable discussions . However, this shift has introduced a nuanced interplay of effects on humanities pedagogic values . While social networking may potentially contribute to a key approach to medical humanities education; enhancing reflection and collaboration in learning , the fundamentals in integrating social media into medical humanities context are blurred and contradictive . For example, the medical humanities often emphasize narrative medicine (NM), which involves listening to and understanding patients’ stories . While social networking platforms can serve as a stage for these narratives, they also have the capacity to foster superficial or unfinished storytelling , hindering the cultivation of essential skills among healthcare students and professionals, as they may not develop the ability to fully comprehend and appreciate the complexity of patient narratives. Meanwhile, there are challenges when integrating the tools into medical humanities education, including the superficial nature of many interactions , the potential for distraction, and the risk of creating echo chambers present challenges to the development of critical thinking and in-depth analysis in medical humanities education . They are important issues for educators and students in preserving and enhancing the quality of medical humanities pedagogy in the digital age. As the rapid integration of technology and social media into healthcare and education, a thoughtful examination of the impact on the humanistic aspects of medical practice and education is needed, According to social learning theory , the social aspect of learning is central, with interactions between individuals, peers, and the learning context shaping cognition and behavior, which includes knowledge exchange and cultural understanding. Integrating social networks into learning aligns with key elements of this theory, involving individual learners, peers, and situations that influence learning outcomes . Furthermore, it encourages self-regulation in learning, prompting individuals to actively acquire and organize knowledge. Integrating social media such as Facebook has demonstrated benefits among students’ learning outcomes, including developing a sense of social learning and engagement within communities . 
Furthermore, these platforms hold the potential to enhance students’ motivation through meaningful social connections , promote collaborative learning experiences , contribute to overall academic improvement by facilitating immediate and frequent feedback and sustained engagement . Medical students use Facebook informally to enhance their learning and undergraduate lives and enable medical student students to create a supportive learning community amongst their peers . A case study that involved the total 1749 medical student population found that 54.5% students were either using or open to using Facebook for educational purposes . Notably, 27.7% of students using Facebook for educational reasons specifically utilized its ‘groups’ feature . In addition, students typically using Facebook as part of their daily routines to engage in communication with their peers , imply that students see value in using Facebook as a means of communication and collaboration. This level of engagement may potentially enrich their learning experiences and foster a sense of community . It reinforces the idea that integrating Facebook or similar social media platforms into educational contexts have the potential to support and enhance students’ educational journeys by aligning with their preferred modes of communication and interaction. NM is an offspring of literature, medicine, and patient-centred care . The content of NM in the medical humanities pedagogy is to prioritize the learning activities that promote reflective thought, writing through self-reflection, and narrative writing . Thus medical professionals’ main focus is on the patient’s quality of care including multidimensional aspects such as biological, psychological, social, and emotional . Therefore, medical students are required to develop their soft skills as future doctors have problem-solving skills, communication skills, and other skills that support their professional development . In accordance with the constructivist theoretical framework for incorporating social media into medical education , this approach enables educators and students to interact and apply the learning process in more imaginative and innovative ways. These include fostering active engagement between learners and instructors, reducing the role of teachers as learners collaborate on group projects, fostering enhanced problem-solving skills, promoting self-directed learning, providing avenues for learners to engage in reflective thinking, and tailoring the learning environment to authentic contexts by utilizing problem-based and case-based materials . Despite the fact that social media has delivered considerable advantages and added value to educational initiatives within the medical humanities field, it has augmented conventional medical humanities education , improved communication and broadened accessibility , promoted collaborative teamwork , and increased students’ exposure to real-world practices and expert insights through enhanced interactivity . Several studies conducted in Western countries have reported results on the medical student population , highlighting the use of Facebook as a platform for sharing experiences in analyzing patients’ narrative stories . These studies underscore the importance of raising awareness about the development of students’ professional identity and recognizing the role of social media in their lives, along with their responsibilities to future patients and the medical profession . 
To the best of our knowledge, there has been no exploration of the incorporation of social media, particularly in the context of NM, within the Asian medical humanities landscape. Consequently, our study is focused on assessing the effectiveness, barriers, and perceptions of integrating a social media -Facebook- as a social network into a NM program for 5th-year clerkship students in Taiwan.
Research context

The NM programme was conducted over a total of 12 weeks during the students’ clerkship in the Department of Internal Medicine, Chang Gung Memorial Hospital (CGMH), Linkou branch. A total of about 75 medical students in their 5th year of medical study joined the programme each semester. The programme comprised a series of workshops, lecture classes, small-group discussions, and storytelling. Additionally, we integrated Facebook as a medium to facilitate students’ participation in the NM programme activities (Table ). The rationale of this programme was to support teachers and trainees in fulfilling the aims of medical humanities education. It sought to advance medical humanities education by nurturing a sense of trust between mentors and trainees and by creating designated time for trainees to reflect on their clinical experiences. Regarding the integration of social media, the material posted by teachers on Facebook included students’ narrative stories, teachers’ responses, and other humanities materials. The students were invited into the Facebook Narrative Medicine group in a closed system and encouraged to leave comments without compulsion. The objective of integrating Facebook into the programme was to evaluate how medical students perceive the effectiveness of incorporating social media into medical humanities education. To facilitate this, a narrative approach was adopted, encouraging medical students to write about their day-to-day clinical interactions, challenges, and achievements with patients.

Methodological orientation

This study utilized a descriptive qualitative approach, which is geared toward offering a straightforward depiction of a phenomenon, with the specific goal of informing the enhancement of programme interventions. The data were transcribed verbatim and analysed inductively.

Sampling

We used a purposive sampling technique to ensure a wide range of participants’ experiences, backgrounds, and attitudes. The sample size was determined via theoretical saturation: we continued to recruit new participants until no new code was identified.

Method of approach

We recruited participants from the School of Medicine at Chang Gung University, Taiwan. We focused on 5th-year medical students as our study population, since they had received the complete NM programme in their 5th-year clerkship. We asked the secretary of the NM programme and the teachers in charge to inform students publicly about the research plans and activities. The teachers in charge allowed research members to inform students about the integration of Facebook into the NM programme, alongside specific rules for the Facebook activities. These rules covered how to react to, write and reflect on, and comment on content uploaded to Facebook by research team members and teachers. During the research, teachers and research members posted NM-related content on Facebook to encourage students’ interaction and discussion on the platform. The secretary of the NM programme in the Department of Internal Medicine and a research team member (BLH) then publicly announced the recruitment of research participants to the students. With students’ consent, team members contacted them via email or phone to recruit participants for the focus group interviews and to arrange their interview schedules. Informed consent was obtained, and a small monetary reward (NT$250) was offered for participation.
BLH recruited participants on a rolling basis until data saturation was reached, that is, until no new codes emerged from the data analysis.

Participants

Seventeen participants were initially recruited for the study. However, one participant had to withdraw due to scheduling conflicts, resulting in a total of sixteen medical students being interviewed. The students were organized into four focus group discussions based on their availability. Transcriptions of the interviews yielded four transcripts. All participants were of Taiwanese nationality, aged between 22 and 26 years, and comprised 7 females and 9 males.

Setting and recruitment

This study was approved by the Institutional Review Board (Chang Gung Medical Foundation Institutional Review Board, CGMF-IRB), with certification of approval (202000437B0) for this Facebook activity involving students. The potential risks related to sharing stories or interacting online were mitigated by using a closed Facebook group. The strategies used to protect user privacy and confidentiality in the online environment were regulated and supervised by the CGMF-IRB. All methods were performed in accordance with the relevant guidelines and regulations.

Researchers positioning

Our team consists of six members with diverse academic backgrounds and ethnicities. YSM, a female Indonesian research assistant, holds a Master of Science in physical therapy and public health. Proficient in interviews, data analysis, and qualitative software, she has made significant contributions to various qualitative studies. BLH, a male Vietnamese research assistant, holds a Master of Arts degree and is actively involved in linguistic and humanistic research, having previously used qualitative study designs. CDH, CCJ, and TYW are male Taiwanese researchers with PhD and MD credentials who have actively conducted and contributed to research in the field of medical humanities.

Data collection

Semi-structured focus group interviews were conducted to explore students’ perceptions and experiences of the social media integration into the NM programme. We developed an interview outline based on our research questions and a literature review related to this study (see appendix ). Each interview lasted approximately 60 min and was conducted by BLH in English in a quiet room in our medical center. Since some participants were more comfortable answering the questions in Chinese, a professional interpreter was involved in the interviews to help the interviewer gain a better understanding and to bridge the language barrier between the interviewer and participants. The relationship between the interviewer and participants was independent, with no power asymmetry during the interviews. Negative effects on participants were minimized, since the interviewer was not involved in any academic or professional activities with the participants. All interviews were audio-recorded, transcribed verbatim, and anonymized. All interviews were conducted in a single session, with no repeat interviews. Participants were coded as STUDENT [number] according to their speaking order at the beginning of the interview, and transcripts were coded as D [number] according to their order in the analysis process.

Data analysis

Data were managed and coded using the qualitative analysis software package ATLAS.ti version 9.0. Verbatim transcription of the data in English was undertaken by YSM and BLH immediately after data collection.
For data in Chinese, the translation process (from Mandarin to English) was performed by a professional translator and carefully evaluated for translation accuracy by bilingual research members (CDH, CCJ, and TYW). We used descriptive thematic analysis to analyze the data. Thematic analysis is a method for identifying, analyzing, and reporting patterns (themes) within data; this kind of analysis identifies and builds up an analysis in a coherent manner (immersion). The data were analyzed inductively. The data analysis began with YSM and BLH familiarizing themselves with the data by reading, reviewing, and re-reading the transcripts independently and noting down the meaning of each quote to generate initial codes; CDH and YSM then collated the codes into potential themes and defined and named the themes. The final stage of the data analysis involved all research team members (CDH, YSM, CCJ, BLH, and TYW), who reviewed, refined, and discussed the themes and subthemes representing the whole data set. Moreover, the team members actively considered the notion of data saturation. Any discrepancies were meticulously addressed through collaborative discussions until consensus was reached, ensuring the completion of the report. Member checking was done by sharing the final results representing participants’ quotes with participants via email to confirm the accuracy of the data interpretation.

Trustworthiness and rigor

The quality of our study was assessed against the core criteria identified by Lincoln and Guba: credibility, transferability, and confirmability. We used investigator triangulation throughout the process, involving one interviewer (BLH), two coders (YSM and BLH), and all research members (YSM, BLH, CDH, TYW, CCJ, and CHH) in the data analysis and the writing of the report, thus increasing the credibility and rigor of the whole analysis process. Codes and themes were continuously examined and discussed by the research team to ensure consistency. All research members met frequently, either online or face-to-face, for peer debriefing and progress reports. Lastly, through member checking conducted by YSM via email with participants, we confirmed that the final coding results accurately reflected participants’ perceptions and experiences of the integration of social media (Facebook) into the NM programme.
Six main themes were derived from the data analysis: (1) positive experiences of social media integration; (2) negative experiences of social media integration; (3) barriers to writing and sharing NM stories on social media; (4) barriers to reading NM stories on social media; (5) barriers to reacting to content on social media; and (6) suggestions for future improvement.

Theme 1: positive experiences of social media integration

Subtheme 1–1: the Facebook group facilitates students in streamlining thought-to-writing

“…it (a Facebook) can record what we have said or else. Because sometimes you can’t figure out what you want to say (when writing up the Narrative story or experiences). …When faced with the challenge of expressing experiences, the platform allows for quick organization and clarity, ensuring our narratives are captured effectively.” (S3-D4).

Subtheme 1–2: enhancement of sharing others’ narrative articles through the integration of Facebook

“I think Facebook simplifies the process of discovering and sharing articles within our Narrative Medicine course, fostering a seamless exchange of insights.” (S1-D1).

Theme 2: negative experiences of social media integration

Subtheme 2–1: easy to get distracted

“I think we get distracted easily, we want to watch or see other things online …we don’t spend time on study of medicine and prefer logging out or playing games, cause it more attractive for us” (S1-D2).

“…there are many things to distract our attention while online.” (S1-D3).

Subtheme 2–2: Facebook cannot replace the face-to-face class

“I don’t think that the group of narrative medicine can replace the real class. Yes, the real class could be more impressive, you can hear the experiences, emotions, and feelings of other classmates, and they can talk to you face-to-face. But, if you just look at the article or just read of some students’ articles, I don’t think it is fulfilled enough.” (S4-D3).

“I think if the social media is integrated with the online courses for biomedical knowledge, it is very good and yes, but for narrative medicine, I think the person-in-person experience by face-to-face class is more important.” (S2-D1).

Subtheme 2–3: suboptimal for interacting with the NM content

“…some senior doctors don’t have much time to use this Facebook to type their idea or what story they encounter. So, probably they shared a lot of the story when we have person-to-person interaction, they don’t share much their story or opinions on the Facebook.” (S1-D1).

Subtheme 2–4: unmet students’ expectations

“We have an online system called E-learning, and teachers post materials on that system and we can download it. So, when they informed us that we will use the Facebook, I expected that they will do something, like more personal interaction or ask each other opinions. But, it seems that the function is just the same as our E-learning system.” (S2-D3).

“I expected that whether there can be some skills or ways for us or to teach us on how to communicate with the patients.” (S2-D4).

“What I expected that teachers will share videos or tutorials to practice narrative medicine and how to communicate with patients.” (S3-D4).

Theme 3: barriers to writing and sharing NM stories on social media

Subtheme 3–1: unable to meet the tasks’ demands

“Actually, we have some narrative stories during our practice in the hospital. But not every time we can write it into five hundred words of essay or paragraphs.
Sometimes it is just two or three sentences and we shared it when we meet our colleagues on the way to the hospital or at night.” (S2-D3).

“…one of our classmates did not submit his homework of the narrative medicine story writing, because he said that he didn’t have any impressive or unforgettable moment related to the medical humanities practice.” (S3-D3).

“…people have a lot of feelings and ideas, but only a few people are able to change them into words.” (S5-D3).

Subtheme 3–2: writing doubt deters platform sharing

“…if you think that your writing is not good enough, you would prefer not to post it in a Facebook.” (S3-D3).

Theme 4: barriers to reading NM stories on social media

Subtheme 4–1: lack of time and supervision from teachers

“Actually, because the teacher didn’t know whether we read it or not. So, I think I don’t need to read it.” (Student added) “… and we do not have much time to see it. Besides, I can learn it face-to-face. So, I didn’t take notes at all. …I didn’t absorb any knowledge from the article.” (S1-D3).

“It’s hard to finish or to read all articles, and I think there are even people older than us (who has busier schedule) that many of us don’t have time to finish all of the articles.” (S1-D1).

Subtheme 4–2: doubt emerges when stories diverge from personal or observed experiences

“Some stories may not attractive for me. I see some stories make me think that it is fake or not … yeah, because in my experience I didn’t see his or her experience like what they written up before even when I see my teacher in a clinical presentation, it made myself have a suspicious thought for these kinds of articles.” (S3-D2).

Theme 5: barriers to reacting to content on social media

Subtheme 5–1: personal factors

“I feel shy to post …or leaving comments publicly.” (S1, S5-D3).

“…our classmate may not be very comfortable (if we react on their article) and we are embarrassed to leave some message.” (S3-D4).

Subtheme 5–2: inter-personal factors

“…because we were not to encouraged to share our opinion in front of the teacher…most of the time we just ask about teachers’ opinions and uncommon to share our opinion publicly or share what we learned with them. So, sometimes we are afraid of being criticized by others or just being afraid to say.” (S1-D3) and “…give us little more encouragement.” (S4-D4).

Subtheme 5–3: cultural factors

“I think culture issue is one thing that causes Asian students like us tend to shy and keep our opinion to ourselves.” (S2-D3).

“…we don’t comment much in the Facebook if we are not familiar or very interested with and even on the Facebook fan page that we are interested in we also may not leave any comment like that.” (S1-D2).

Subtheme 5–4: technical factors

“We use it as a media that teachers announce to us what time to go to the class, the classroom announcement, and how we can join the class.” (S5-D3) and “I only use it to check whether there is some information about the class.” (S1-D1).

“If someone posts something new in the narrative medicine group, but we view this group rarely, then the post will not show on our Facebook feeds, so we wouldn’t see it.” (S1-D2) and “…I don’t even know the article in there.” (S1-D1).

Theme 6: suggestions for future improvement

Subtheme 6–1: feedback and rewards from teachers

“…Teachers want us to write down our experiences in the clinical practice, so it can be combined with the form of a literature contest. We can submit the articles and then we can review them with teachers.
So, it’s like a competition and the winner can get some prizes …yeah like to encourage us, it can be a small incentive for students.” (S1, S2-D1). “…we can draw a lucky prize or have a gift for the good comments or for students who post the most comments on the articles.” (S1, S3-D2). Subtheme 6–2 content improvement in social media platform “We follow many medical associate groups and they posted some medical knowledge in a very simple way for us to understand. So, I think it’s really important if you want us to have more attention to narrative medicine (through social media) I think more activities or some cooperation with some famous people would be more attractive.” (S1-D2). Subtheme 6–3 social media platform improvement “…someone who posts the contents can be hidden their name instead of show their name on Facebook. I think it would be helpful for us and people to post their experiences and opinions.” (S3-D2). “…anonymous can maybe more stable for this class, at least post should be anonymous. So, when we leave some comments, we are not worried about being criticized .” (S1-D3). “I think maybe they (teachers) can link to another kind of media, …yeah it is more attractive way like connect it with podcasts or some media that can let us share our idea in a more clearly.” (S5-D1).
Integrating social media into a NM programme for fifth-year clerkship medical students is a novel approach within the field of medical humanities education. Our findings reveal that this integration has both favorable and unfavorable dimensions, which are significantly shaped by the prevailing learning culture, values, attitudes towards popular culture, individual behavior, and personal choices. When Facebook was implemented in the NM programme, medical students recognized its role in facilitating their learning journey. However, they also acknowledged that it cannot serve as a complete substitute for in-person classes. In general, the medical students are accustomed to the university’s internal e-learning system, which enables easy access, utilization, and downloading of all course materials. When social media is integrated with online courses for biomedical knowledge, much like the e-learning system, it can work well, as e-learning appears to be at least as effective as traditional instructor-led methods such as lectures. However, NM fundamentally revolves around nurturing the humanistic aspects of medicine, which involve empathizing with and reflecting upon the emotions of others in the context of narratives. Consequently, integrating social media without face-to-face interaction is perceived as less effective in this context. Moreover, the outcomes of NM heavily depend on the development of interpersonal relationships, the cultivation of empathy, and the ability to adopt different perspectives. Thus, face-to-face classes that encourage person-to-person interactions are preferred for NM programme activities over textual content on social media platforms.

In addition, the current trend in social media consumption emphasizes platforms that employ audio-visual content, including recorded videos, live streams, creative visuals, and podcasts. It is essential for programme developers, educators, and facilitators to consider these preferences when integrating social media into the learning process for students.

Although Line has been the dominant social media platform in Taiwan since it overtook Facebook in 2018, Facebook is still widely used by approximately 90% of Taiwanese people. While Facebook remains one of the most popular social media platforms, the broader culture of social media usage significantly influences our students’ choices and how they incorporate it into their daily lives. Our medical students commonly use Facebook for communication and for accessing information related to their coursework and class schedules. Consequently, introducing Facebook into the curriculum as a platform for idea sharing and discussion may not align with their preferences. However, considering the prevalence of other social media tools that have permeated popular culture, integrating these alternative platforms could potentially increase student engagement in the learning process. For instance, the American Academy of Physical Medicine & Rehabilitation hosts “Phyzforum,” a social network designed for sharing ideas, asking questions, and providing comments. Another well-known discussion platform is SERMO, which serves as a valuable tool for healthcare professionals to connect with their peers, enabling them to exchange diverse experiences and strategies related to various medical conditions and an array of peer-to-peer subjects. Our results indicate that, to achieve favorable outcomes, it is essential to use current platforms that align with students’ requirements and inclinations.
Our research identified several factors that serve as barriers to the integration of social media into NM programmes. These factors encompass personal, interpersonal, cultural, and technical aspects, and they significantly impact students’ engagement in the programme. Our findings reveal that students’ online learning behaviors are greatly shaped by their personal values. One noteworthy observation is that students tend to withhold their ideas, particularly when they feel uncertain about their writing abilities. This reluctance to share ideas has noticeable repercussions on students’ motivation as active participants in online learning environments. Their lack of familiarity with commenting publicly and their desire for private spaces in which to give feedback to peers led to reduced participation in the Facebook group, resulting in limited interaction and a lack of relationship development with fellow students and colleagues.

Regarding cultural aspects, feelings of shyness, insecurity, and apprehension about receiving negative feedback from peers, observed among our students in Taiwan, underscore the adverse effects of using social media in the learning process. Shyness, insecurity, and concern about receiving negative feedback from peers are, of course, part of students’ inner thoughts and expressions. However, the Confucianist learning tradition is deeply ingrained in East Asian education. Confucian culture holds considerable sway over various aspects of society and is notably influential in healthcare research and medical education in East Asian countries. The influence of Confucianism on learning styles in medical education warrants examination within the broader context of local cultural influences in East Asia. As a result, a majority of students recommended incorporating an anonymity feature within the learning platform when integrating social media into the NM programme. This recommendation aligns with factors previously associated with students’ engagement in online learning [ – ]. In addition, anonymity has important pedagogical implications because it affects factors that shape posting behavior, such as online privacy concern, self-consciousness, fear of negative evaluation, trust in the virtual community, perceived psychological safety, and self-efficacy. This finding implies that allowing anonymous posts in online discussion boards could potentially enhance student engagement by creating a psychologically secure learning environment that mitigates the influence of self-consciousness and the fear of negative assessment on posting activities.

Evaluating and aligning students’ learning requirements and expectations with the available faculty resources and institutional support is crucial for enhancing future programmes. The outcomes of this study clearly emphasize the need for an assessment of integrating social media technologies into medical humanities education, particularly in the context of the NM programme. This assessment should take into account the preferences of users. To better cater to the needs of medical students when integrating social media into the medical humanities learning context, improvements in content quality, adoption of current platforms, and programme evaluation must be prioritized.

Limitations

This study is subject to several limitations. Firstly, the study’s participants were confined to fifth-year medical students at a single site.
Future research should aim to encompass a more extensive range of medical centers to augment the diversity of participant backgrounds, perspectives, and attitudes towards learning. Secondly, this study exclusively focused on integrating Facebook into the NM programme. Further research is warranted to explore the integration of other social media platforms into medical humanities education to gain a more comprehensive understanding.
The integration of social media into education is increasingly gaining popularity. However, in Taiwan, using Facebook as a means to discuss and interact within the context of the NM programme was unfamiliar, primarily because our learners typically use Facebook as a medium for accessing course materials and programme-related information. Social media, in general, brings both positive and negative aspects to the learning process. When incorporating social media into educational practices, it is essential to have a firm grasp of the platforms that participants are currently using. Aligning these tools with students’ values, cultural backgrounds, and learning attitudes is a critical consideration in the integration of social media into medical education practices. Furthermore, it is crucial to conduct in-depth research into the perspectives of faculty members and educators regarding the use of social media in the learning process. This exploration is vital for the development of an enhanced e-learning system that can effectively deliver outcomes aligned with the curricula of medical humanities education.
Facilitators and barriers faced by community organizations supporting older adults during the COVID-19 pandemic | 3c7660c3-d489-4c05-87bb-b8f615d70104 | 11951528 | Community Health Services[mh] | Over the course of the COVID-19 pandemic, public health restrictions regarding physical distancing were implemented globally to protect high-risk populations including older adults. These measures were intended to protect older adults but inevitably reduced in-person interactions and community participation, leading to adverse consequences such as social isolation and loneliness among some older adults [ – ]. In Canada, health centres and essential services mostly remained open during the pandemic to provide necessary services either in person or virtually via both internet and telephone supported services. Many social and community organizations, including senior centers (who provide critical non-medical services) were mandated to close in-person services . It could be argued, however, these resources should have been characterized as essential, given that lack of social support can lead to physical decline and poor health . For older adults, community organizations play a critical role in wellbeing by providing services that promote physical health, including chronic illness management and preventive care . These organizations also play a key role in promoting leisure activities, helping older adults maintain a structured routine, and fostering a sense of community [ – ]. Engaging in such activities to maintain physical and cognitive functions is a component of ageing well [ – ]. Thus, community organizations may improve older adults’ quality of life as they allow older adults to develop social networks and combat isolation and loneliness [ – ]. A national survey from the Canadian Mental Health Association notes that many mental health focused community organizations faced challenges delivering services during the COVID-19 pandemic and felt that older adults were particularly disadvantaged in the move to virtual services . Information technology (IT) use during the pandemic increased among older adults and remains on an upward trajectory . Evidence related to the acceptance and perceived usefulness of IT among older adults remains mixed . In a small phenomenological study, older adults who were able to access virtual community centre programs reported that IT was a valuable tool for participating in programs such as book clubs and support groups . Wolman and colleagues reported that virtual offerings were particularly well-received among those with mobility issues but overall, older adults missed in-person programs and ranked online options as ‘second best’ . Unfortunately, the perspectives of older adults who were averse to, or unable to engage in virtual programs were not included in this study by Wolman and colleagues (a notable limitation) . To understand the most efficient and meaningful way to support the health and wellbeing of older adults during the pandemic, we sought to document the efforts that took place amongst community organizations operating in different contexts within the Canadian province of British Columbia (B.C.), to meet the needs of older adults via virtual delivery methods. This paper contributes to the evidence regarding older adult-focused community organizations’ perspectives of utilizing virtual programming to support older adults during this period . 
Purpose

The purpose of this study was to determine the extent to which older adult-focused community organizations moved to online or virtual delivery formats, and to document these organizations’ perspectives on the nuances of the barriers and facilitators of moving to a virtual format, for both the organizations and older adults. The research questions were: (1) To what extent did older adult-focused community organizations in B.C. move to online or virtual delivery formats to support the social needs of older adults during the COVID-19 pandemic? (2) What are the perceptions of older adult-focused community organizations of the barriers and facilitators to moving to online or virtual delivery formats during the COVID-19 pandemic?
We conducted an environmental scan of community organizations for older adults in one Canadian province. Environmental scanning is a method designed to gather information about an organizational ecosystem, related to opportunities and threats, to inform future organizational strategies. It is a suitable method for understanding the strengths, weaknesses, and opportunities for improvement within a specific context. The environmental scan method aligned with our study aims and was feasible within the constraints of the study: it could be conducted during the pandemic, with limited resources, using distance-based methods, and within timeframes short enough to allow rapid intervention, while still allowing us to understand the full landscape of services offered by older adult-focused community organizations. This study was reviewed and approved by the University of British Columbia Harmonized Research Ethics Board (H20-03120).

Setting and sample

The study was conducted in the Canadian province of B.C., which comprises five geographically and demographically unique regions. We aimed to recruit representatives from a sample of older adult-focused community organizations across all five regions, which included urban, rural, remote, and northern locations. We recruited participants until we noted repetition in themes related to our research questions. While sampling is discussed in environmental scan methods, there is limited guidance on sample size. We adopted principles of information power and theme repetition, which we revisited throughout the data collection process. After 22 interviews, there was consensus on the themes and repetition in the experiences shared by the organizations, and we felt they sufficiently answered the research questions. We then sought four additional interviews to ensure that additional nuances were not missed.

Data collection

We collected data via internet searches and interviews with key informants. In line with environmental scanning approaches, we used multiple strategies to identify community organizations and key informants, including internet searching and snowball sampling. This process started with searches on Google and Facebook for ‘senior organization’ (senior OR seniors OR older adults) AND ‘British Columbia’. For each result, we collected publicly available information about the organization’s location, purpose, and prospective key informants’ email addresses and phone numbers. This information was tracked in an Excel spreadsheet and organized by the province’s five public health regions to ensure geographic coverage (some organizations had a province-wide reach, which was also noted). We expanded the list of organizations by searching BC211, a publicly available online directory of older adult-focused organizations, and by snowball sampling with organizations that agreed to participate. We searched the BC211 service using the selections ‘older adults’ and ‘social/recreation’, and contacted all organizations that had listed contact information and were operating during the pandemic. We generated a list of 90 organizations, of which 28 were found through web and social media searches, 34 came from the BC211 database, and 26 were identified through snowball sampling. We also contacted one organization whose representative was known to the first author and another that was profiled in national news coverage for its virtual programming for older adults.
We adopted a layered approach to searching, where new results from each additional strategy (web and social media, database, snowball sampling) were added to the spreadsheet. For each organization on the list, we sent a recruitment email to the identified contact person inviting them to participate in the study, and followed up with a phone call if no response was forthcoming. We contacted a total of 90 organizations, of which 26 responded and agreed to participate. When a contact agreed to participate, we obtained their consent electronically and an author (DD) conducted a telephone or Zoom (a video-conferencing software) interview lasting 45 minutes on average. Interviews were conducted with an executive director or a suitable surrogate (i.e., a program manager). We used a semi-structured interview guide developed with members of our team who have experience working with older adult-focused community-based services (see the supplement for sample interview questions). The interviewer (DD), an experienced qualitative researcher, scribed the interviews by recording comprehensive notes and summarizing participant responses in a table format. There is potential for loss of nuance in scribing compared to recording and transcribing. Nonetheless, a 2019 study found that themes identified in qualitative research where scribing was used were highly consistent with themes identified using traditional recording and transcribing methods. Scribing – whether by a third person, as described by Eaton, or by the interviewer themself – may allow researchers to synthesize data and include salient contextual details in their notes. This method aligned with our study design, as it promotes an iterative process: preliminary data analysis during the scribing process informed subsequent interviews. To further promote accuracy, we took the additional step of sending the completed table of scribed notes to participants to be checked for corrections or additions. Participant revisions were typically minor clarifications or additions of detail. Because the data for this analysis were the researcher’s paraphrases rather than the participants’ own words, we use only limited verbatim excerpts in presenting the results below.

Analysis

After the notes were reviewed by participants, they were imported into the latest version of NVivo (released March 2020) for analysis. We undertook an initial cycle of attribute coding for region, user population (seniors vs. all ages), and primary purpose (social, educational, health, or mixed), based on each organization’s public website and participant responses. Subsequently, two researchers (DD, KH) engaged in a reflexive thematic analysis: a process of reading and assigning codes, organizing data into themes, and defining and refining themes and sub-themes. The PhD-prepared authors involved in data analysis both have experience conducting qualitative data analysis and experience working and volunteering within community organizations. Throughout the analytic process, we developed inductive themes based on the meanings provided by participants. We addressed rigour following the tenets of rigour and trustworthiness described by Thorne. Although rigour is not often explicitly addressed in environmental scanning approaches, we applied the principles of: (1) interpretive authority; (2) epistemological integrity; (3) analytic logic; and (4) representative credibility.
We did so by ensuring our purpose, approach, and methodological decisions were aligned; limiting claims to those which could be supported by the scribed interview data; checking our interpretations of interviews with each participant; and clearly describing the limitations of our study.
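To make the layered search-and-tracking procedure described under Data collection concrete, the minimal sketch below shows one way such a tracker could be implemented in Python. This is an illustrative approximation only: the study tracked records in an Excel spreadsheet, and the organization names, field names, and normalization rule shown here are hypothetical.

```python
# Illustrative sketch of the layered search-tracking approach described
# under Data collection. All organization names and fields are hypothetical;
# the actual study recorded this information in an Excel spreadsheet.

def normalize(name: str) -> str:
    """Normalize an organization name so duplicates across layers match."""
    return " ".join(name.lower().split())

# Search layers applied in order: web/social media, BC211 database, snowball.
layers = {
    "web_and_social": [
        {"name": "Example Seniors Centre", "region": "Fraser"},
    ],
    "bc211_database": [
        {"name": "example seniors centre", "region": "Fraser"},  # duplicate of above
        {"name": "Hypothetical Elder Society", "region": "Northern"},
    ],
    "snowball": [
        {"name": "Illustrative Seniors Network", "region": "Interior"},
    ],
}

tracker: dict[str, dict] = {}
for source, results in layers.items():
    for org in results:
        key = normalize(org["name"])
        if key not in tracker:  # keep the first occurrence, noting which layer found it
            tracker[key] = {**org, "source": source, "participated": False}

# Mark organizations whose contact completed an interview.
tracker[normalize("Example Seniors Centre")]["participated"] = True

contacted = len(tracker)
participated = sum(org["participated"] for org in tracker.values())
print(f"Response rate: {participated}/{contacted} ({participated / contacted:.0%})")
# The study's actual figures were 26 of 90 organizations (29%).
```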
From December 2020 through June 2021, we interviewed 26 contacts from the 90 identified community organizations (a response rate of 29%). Twenty-five of the community organizations had moved at least some services to a virtual format. Table summarizes the number and geographic distribution of organizations that were contacted, and the number of those who agreed to participate in the study. Based on interview responses and publicly available website information, we profiled participating community organizations by their constituency (older adult-focused vs. all-ages with some older adult-oriented programming) and purpose (educational, social, health, or mixed), as presented in Table . In our classification, the constituency of older adult-focused community organizations welcomed family caregivers, who could be but were not necessarily older adults.

We identified three themes (with subthemes) that describe the complexities of shifting existing programs to virtual delivery formats and developing new programs under pandemic conditions, from the perspectives of community organizations: (1) challenges for users and programs; (2) organizational facilitators; and (3) meeting the challenge. Under each theme we iteratively coded several sub-themes, and this process led us to recognize that ‘barriers’ impacted individuals and organizations alike. For example, community organizations provided unique perspectives on how their users faced barriers to accessing services, and on how they themselves experienced barriers in providing services. ‘Facilitators’ included factors that made it easier for the organization to provide services. The themes and subthemes, supported by participant exemplars, are described below (see Table for the thematic map).

I. Challenges for Users and Programs

Community organizations described various challenges as they attempted to address the social support and interaction needs of older adults during the pandemic through virtual program delivery. The challenges encompassed three main areas: (1) health and wellbeing; (2) information technology issues; and (3) organizational and personal impacts of program changes.

Diminished health and wellbeing

Many participants reported that social isolation and loneliness were the most salient challenges reported by their program users. However, they also perceived the personal health challenges of their collective users as significant. Participants shared that program users reported dementia, cognitive decline, visual impairment, deafness, psychological trauma not caused by but exacerbated by the pandemic (e.g., feelings of confinement among individuals who had experienced trauma), and mental health challenges such as anxiety and depression. Participants described how older adults’ pre-existing challenges were compounded by the pandemic, which motivated community organizations to find novel ways to maintain their services via virtual delivery. Some respondents also noted that members encountered challenges involving access to necessities such as food or housing. Respondents serving linguistically and culturally diverse populations also raised the importance of cultural health and wellbeing. Participants were concerned that program users who spoke first languages other than English were at greater risk of missing information about public health restrictions and vaccinations.
Additionally, one participant emphasized that public health measures had harmful implications for Indigenous older adults, whose strong connections to family, community, and traditional lands had been disrupted.

Information technology issues

Challenges related to older adults using IT to connect remotely with community organizations were frequently shared. Older adults’ difficulty learning to use unfamiliar IT devices and platforms was the top challenge reported by community organizations. Participants observed that these difficulties could be exacerbated by ageing-related cognitive decline, memory loss, or dementia. Almost as prominent was a lack of access to IT devices or internet connections at home. Within this category, participants emphasized the importance of distinguishing between older adults who seldom used IT and those with more familiarity who lost access when public computers or Wi-Fi (e.g., at libraries or cafés) became unavailable due to pandemic restrictions. Several community organizations’ struggles to provide IT support for users seeking to get online and use remote programs exacerbated this situation. Another IT challenge described by participants was older adults’ generalized distrust of technologies, grounded in concerns about showing one’s home surroundings on video; being spied on through cameras and microphones; and providing one’s financial information in new ways (e.g., paying a registration fee online for an online course). Some participants reported accessibility challenges with common IT device and platform interfaces, for example telephone conferencing platforms that were unintuitive to use, or tablets that were physically difficult to manipulate with reduced dexterity after a stroke.

Organizational and personal losses related to program changes

Participants described various organizational and personal losses related to the suspension of organized, in-person activities. The most notable loss for program users was the loss of informal social interactions, such as chatting before or after a gathering, and the loss of an enjoyable routine. Given that many organized activities such as classes were led by volunteers who are themselves older adults, the suspension of gatherings also led to role loss for some, especially if they were unable or unwilling to make the transition to facilitating their activities online. Thus, these program changes had both program-level and personal implications.

II. Organizational Facilitators

Despite facing many challenges, several organizational attributes and resources facilitated participants’ ability to continue supporting older adults: (1) inter- and intra-organizational relationships; (2) intrinsic qualities of program design; (3) physical resources for virtual activities; and (4) technological facilitators.

Inter- and intra-organizational relationships

One of the most frequent and reportedly impactful facilitators of successful online activities was collaboration with other older adult-focused community organizations. Cooperation among organizations sharing a similar focus led to exchanges of knowledge and best practices, and to member referrals. New partnerships with other community organizations resulted in new virtual programming, including virtual tours of museums or art galleries; at-home exercise routines led by yoga studios; and reading clubs co-hosted by libraries. Another prominent form of cooperation was to rely on another, often larger, organization to provide IT training and support for members.
Several community organizations interviewed for this study were older adult-focused programs within a larger organization providing services to people of different ages. An example of this relationship was the so-called ‘Elder Colleges’, which offer a wide range of courses and educational programs for older adults. A subset of the Elder Colleges were affiliated with a college or university, which provided technological resources such as online registration and scheduling systems, and support for instructors and students in navigating virtual learning platforms. Such supports were invaluable in promoting a smooth transition to online programming.

Intrinsic qualities of program design

Some participants attributed a smoother transition to online and hybrid activities to intrinsic qualities of their existing programs. While multiple modes of communication to connect with members and deliver programming were occasionally utilized before the pandemic, reliance on these methods over this period was considerably higher. Phone trees, messaging apps, email announcements, and physical mailings allowed for broad advertisement of their programs. Organizations also reported using both video-conferencing platforms and phone-based conferencing systems for program delivery. Organizations that used both modalities reported that each had different strengths and weaknesses and tended to attract different audiences. Video required access to IT devices with cameras and more technical knowledge and support, but enabled activities with a visual component, such as crafting or cooking classes. Telephone activities were simpler for some older adults to set up and provided a balance of privacy and intimacy that many program users appreciated.

Another important resource was an existing intergenerational focus within the organization. Established relationships with young people were re-purposed for weekly check-in phone calls and interest-based conversations by Zoom, telephone, or messaging app. Some of these connections also matched youth and seniors who spoke languages other than English. One organization reported a child-led cooking program on Zoom that had proven very popular with older adults. Some organizations also enlisted high school or university students to provide IT support. This approach garnered mixed reviews: one participant noted that young people tended to have difficulty understanding older adults’ needs, while another suggested that young people who were not family members were well suited to this role.

Other program characteristics reported to facilitate uptake of virtual activities included offering a wide variety of activities, keeping a regular schedule, and being proactive in contacting members about participation. Interviewed community organizations reported offering a huge range of social and educational programming, including games, arts and crafts, themed conversation circles, courses, book clubs, movie or music nights, sing-alongs, guided meditation, exercise (e.g., chair yoga), and guest speakers on topics of interest to older adults. Some activities were gendered (e.g., a men’s discussion group), and three organizations reported that their activities had garnered more interest from women than men. The latter organizations were concerned that this imbalance suggested males might be vulnerable to isolation.
However, one organization described greater participation in Zoom sessions by men than women and was actively searching for ways to increase women’s participation. Moreover, several organizations mentioned that they needed a sufficient number of participants, typically a minimum of 4–6, to make an activity sustainable in terms of invested resources such as staff and volunteer time.

Physical resources for virtual activities

Although virtual social interaction was the focus of our study, we found that virtual activities frequently relied on material and physical resources. Participants described external grants as an important form of financial support for addressing inequities related to older adults’ lack of internet-connected IT devices. However, the application, competition, and reporting requirements associated with external funding were repeatedly described as a significant drain on the resources of smaller organizations, and the comparatively short time scales associated with many grants made long-term planning more difficult. Community organizations described donations of IT resources such as smartphones, tablets, and Wi-Fi equipment by businesses in the community as valuable for supporting older adults’ use of IT for social purposes. Interestingly, home computers were not mentioned in any descriptions of received donations and were not prominent in any of the interviews. A few participants commented that computers in older adults’ homes were often obsolete and not useful for joining virtual activities. Another affordance for virtual activities such as crafting, arts, or cooking on Zoom was scheduled drop-offs of physical goods. Volunteers delivered materials for specific arts programs such as cedar weaving or painting. Likewise, cooking sessions were supported by delivery of fresh ingredients. Despite significant logistical demands, community organizations reported that these programs had a significant payoff in member satisfaction and enjoyment.

Facilitators of technology use

Participants identified many facilitators related to IT. The most salient factor described by participants was access to specialized IT expertise within the organization. This included informal knowledge among the leadership team or membership, a volunteer, or, less frequently, a dedicated IT staff member. One all-ages community organization notably re-purposed their crew of youth lifeguards, who possessed technological skills, as phone support for seniors. Volunteers’ willingness to assist activity facilitators with setting up their sessions and helping members join was regarded as more important than specific IT expertise. At other times, volunteers provided one-to-one training and support for members as they familiarized themselves with new IT devices and platforms. Community organizations reported supporting staff and volunteer activity facilitators by offering training in virtual program facilitation. In addition to learning how to navigate new IT platforms, training included a general introduction to best practices for delivering virtual sessions and managing interactions with program users during different types of activities (e.g., a conversation group vs. a lecture with Q&A). IT training for both facilitators and program users was greatly enhanced by finding or developing resources for common IT apps, platforms, and processes. In some cases, existing programs on using IT devices were also repurposed to support online activities.
According to a couple community organizations, efforts to socialize members into virtual activities benefitted from a technological needs assessment to gauge ability, willingness, and interest to participate. III. Meeting the Challenge Our final theme describes how community organizations responded to the need to rapidly and completely move to virtual programming due to the pandemic. Their insights have implications for future programming. First, community organizations reported increased participation by underserved members in their local communities, which we describe as increased ‘depth’ of reach. Second, community organizations noted new connections with members and other organizations across an expanded geographic range, which we name increased ‘breadth’ of reach. A handful of organizations were confident in the advantages of virtual activities. We note that these changes iteratively informed community organizations’ decisions about virtual and hybrid activities. Increased ‘Depth’ of reach A significant number of community organizations described how the essential transition to virtual programming had enabled them to reach new members within their local community. In some cases, these older adults were homebound due to physical factors including reduced mobility, allergies, or compromised immune systems. In other instances, prospective members had faced transportation barriers related to public transportation, driving long distances, and parking. Community organizations reported that older adults who lived in residential care facilities reportedly found themselves newly isolated when in-person recreation activities at their facility were restricted, or if their facility offered limited/no activities for people with linguistically and culturally diverse needs (e.g., a resident who primarily spoke Japanese in a facility where programming was offered only in English). Many of these previously unreachable older adults, including those who live in residential care facilities, found themselves, with the arrival of virtual programming, able to connect with peers in new and rewarding ways through community organization programming. Organizations lauded this new reach and reported considering keeping such approaches post-pandemic. Increased ‘Breadth’ of reach Almost as many organizations discussed how the online transition had enabled them to attract new participation from outside their previous geographic catchment area. This allowed for collaborations with groups like art museums, musical performances, and individual facilitators of courses, from other areas. Virtual activities also drew participants from distant locations; for example, an Indigenous program user took part in online drum circles from their location in Mexico, and program users who previously only participated in summer activities in B.C. before relocating for the Fall and Winter. Whether or not these members continued their patterns of travel during the pandemic, the ability to take part in programs remotely allowed many program users to participate during new times and in new ways. This new reach was appealing for community groups. Themes are summarized in Fig. .
Community organizations described various challenges as they attempted to address the social support and interaction needs of older adults during the pandemic through virtual program delivery. The challenges encompassed three main areas: (1) health and wellbeing; (2) information technology issues; and (3) organizational and personal impacts of program changes.

Diminished health and wellbeing
Many participants identified social isolation and loneliness as the most salient challenges experienced by their program users. They also perceived the personal health challenges of their collective users as significant. Participants shared that program users reported dementia, cognitive decline, visual impairment, deafness, psychological trauma that predated but was exacerbated by the pandemic (e.g., feelings of confinement among individuals who had experienced trauma), and mental health challenges such as anxiety and depression. Participants described how older adults’ pre-existing challenges were compounded by the pandemic, which motivated community organizations to find novel ways to maintain their services via virtual delivery means. Some respondents also noted that members encountered challenges involving access to necessities such as food or housing. Respondents serving linguistically and culturally diverse populations also raised the importance of cultural health and wellbeing. Participants were concerned that program users who spoke first languages other than English were at greater risk of missing information about public health restrictions and vaccinations. Additionally, one participant emphasized that public health measures had harmful implications for Indigenous older adults whose strong connections to family, community, and traditional lands had been disrupted.

Information technology issues
Challenges related to older adults using IT to connect remotely with community organizations were frequently shared. Older adults’ difficulty learning to use unfamiliar IT devices and platforms was among the top challenges reported by community organizations. Participants observed that these difficulties could be exacerbated by ageing-related cognitive decline, memory loss, or dementia. Almost as prominent was lack of access to IT devices or internet connections at home. Within this category, participants emphasized the importance of distinguishing between older adults who seldom used IT and those with more familiarity who lost access when public computers or Wi-Fi (e.g., at libraries or cafés) became unavailable due to pandemic restrictions. Several community organizations struggled to provide IT support for users seeking to get online and use remote programs, which exacerbated this situation. Another IT challenge described by participants was older adults’ generalized distrust of technologies, grounded in concerns about showing one’s home surroundings on video; being spied on through cameras and microphones; and providing one’s financial information in new ways (e.g., paying a registration fee online for an online course). Some participants reported accessibility challenges with common IT device and platform interfaces, for example telephone conferencing platforms that were unintuitive to use, or tablets that were physically difficult to manipulate with reduced dexterity after a stroke.

Organizational and personal losses related to program changes
Participants described various organizational and personal losses related to the suspension of organized, in-person activities.
The most notable losses for program users were informal social interactions, such as chatting before or after a gathering, and an enjoyable routine. Given that many organized activities such as classes were led by volunteers who were themselves older adults, the suspension of gatherings also led to role loss for some, especially if they were unable or unwilling to make the transition to facilitating their activities online. Thus, these program changes had both program-level and personal implications.
Despite facing many challenges, several organizational attributes and resources facilitated participants’ ability to continue supporting older adults: (1) inter- and intra-organizational relationships; (2) intrinsic qualities of program design; (3) physical resources for virtual activities; and (4) technological facilitators.

Inter- and intra-organizational relationships
One of the most frequent and reportedly impactful facilitators of successful online activities was collaboration with other older adult-focused community organizations. Cooperation among organizations sharing a similar focus led to exchanges of knowledge and best practices, as well as member referrals. New partnerships with other community organizations resulted in new virtual programming, including virtual tours of museums or art galleries; at-home exercise routines led by yoga studios; and reading clubs co-hosted by libraries. Another prominent form of cooperation was to rely on another, often larger, organization to provide IT training and support for members. Several community organizations interviewed for this study were older adult-focused programs within a larger organization providing services to people of different ages. An example of this relationship was so-called ‘Elder Colleges’, which offer a wide range of courses and educational programs for older adults. A subset of the Elder Colleges were affiliated with a college or university, which provided technological resources such as online registration and scheduling systems, and support for instructors and students to navigate virtual learning platforms. Such supports were invaluable in promoting a smooth transition to online programming.

Intrinsic qualities of program design
Some participants described a smoother transition to online and hybrid activities due to intrinsic qualities of their existing programs. While multiple modes of communication to connect with members and deliver programming were occasionally utilized before the pandemic, reliance on these methods over this period was considerably higher. Phone trees, messaging apps, email announcements, and physical mailings allowed for broad advertisement of their programs. Organizations also reported using both video-conferencing platforms and phone-based conferencing systems for program delivery. Organizations that used both modalities reported that each had different strengths and weaknesses and tended to attract different audiences. Video required access to IT devices with cameras and more technical knowledge and support, but enabled activities with a visual component, such as crafting or cooking classes. Telephone activities were simpler for some older adults to set up and provided a balance of privacy and intimacy that many program users appreciated. Another important resource was an existing intergenerational focus within the organization. Established relationships with young people were re-purposed for weekly check-in phone calls and interest-based conversations by Zoom, telephone, or messaging app. Some of these connections also matched youth and seniors who spoke languages other than English. One organization reported a child-led cooking program on Zoom that had proven very popular with older adults. Some organizations also enlisted high school or university students to provide IT support.
This approach garnered mixed reviews, with one participant noting that young people tended to have difficulty understanding older adults’ needs, and another suggesting that young people who were not family members were well suited to this role. Other program characteristics reported to facilitate uptake of virtual activities included offering a wide variety of activities, keeping a regular schedule, and being proactive in contacting members about participation. Interviewed community organizations reported offering a huge range of social and educational programming, including games, arts and crafts, themed conversation circles, courses, book clubs, movie or music nights, sing-alongs, guided meditation, exercise (e.g., chair yoga), and guest speakers on topics of interest to older adults. Some activities were gendered (e.g., a men’s discussion group), and three organizations reported that their activities had garnered more interest from women than men. The latter organizations were concerned that this imbalanced recruitment suggested men might be vulnerable to isolation. However, one organization described greater participation in Zoom sessions by men than women and was actively searching for ways to increase women’s participation. Moreover, several organizations mentioned that they needed a sufficient number of participants, typically a minimum of 4–6, to make an activity sustainable in terms of invested resources such as staff and volunteer time.

Physical resources for virtual activities
Although virtual social interaction was the focus of our study, we found that virtual activities frequently relied on material and physical resources. Participants described external grants as an important form of financial support for addressing inequities related to older adults’ lack of internet-connected IT devices. However, the application, competition, and reporting requirements associated with external funding were repeatedly described as a significant drain on the resources of smaller organizations, and the comparatively short time scales associated with many grants made long-term planning more difficult. Community organizations described donations of IT resources such as smartphones, tablets, and Wi-Fi equipment by businesses in the community as valuable for supporting older adults’ use of IT for social purposes. Interestingly, home computers were not mentioned in any descriptions of received donations and were not prominent in any of the interviews. A few participants commented that computers in older adults’ homes were often obsolete and not useful for joining virtual activities. Another affordance for virtual activities such as crafting, arts, or cooking on Zoom was scheduled drop-offs of physical goods. Volunteers delivered materials for specific arts programs such as cedar weaving or painting. Likewise, cooking sessions were supported by delivery of fresh ingredients. Despite the significant logistics involved, community organizations reported that these programs had a substantial payoff in member satisfaction and enjoyment.

Facilitators of technology use
Participants identified many facilitators related to IT. The most salient factor described by participants was access to specialized IT expertise within the organization. This included informal knowledge among the leadership team or membership, a volunteer, or, less frequently, a dedicated IT staff member. One all-ages community organization notably re-purposed their crew of technologically skilled youth lifeguards as phone support for seniors.
Volunteers’ willingness to assist activity facilitators with setting up their sessions and helping members join was regarded as more important than specific IT expertise. At other times, volunteers provided one-to-one training and support for members as they familiarized themselves with new IT devices and platforms. Community organizations reported supporting staff and volunteer activity facilitators by offering training in virtual program facilitation. In addition to learning how to navigate new IT platforms, training included a general introduction to best practices for delivering virtual sessions and how to manage interactions with program users during different types of activities (e.g., a conversation group vs. a lecture with Q&A). IT training for both facilitators and program users was greatly enhanced by finding or developing resources for common IT apps, platforms, and processes. In some cases, existing programs on using IT devices were also repurposed to support online activities. According to a couple of community organizations, efforts to socialize members into virtual activities benefitted from a technological needs assessment to gauge ability, willingness, and interest to participate.
Meeting the challenge
Our final theme describes how community organizations responded to the need to rapidly and completely move to virtual programming due to the pandemic. Their insights have implications for future programming. First, community organizations reported increased participation by underserved members in their local communities, which we describe as increased ‘depth’ of reach. Second, community organizations noted new connections with members and other organizations across an expanded geographic range, which we name increased ‘breadth’ of reach. A handful of organizations were confident in the advantages of virtual activities. We note that these changes iteratively informed community organizations’ decisions about virtual and hybrid activities.

Increased ‘Depth’ of reach
A significant number of community organizations described how the essential transition to virtual programming had enabled them to reach new members within their local community. In some cases, these older adults were homebound due to physical factors including reduced mobility, allergies, or compromised immune systems. In other instances, prospective members had faced transportation barriers related to public transportation, driving long distances, and parking. Older adults who lived in residential care facilities reportedly found themselves newly isolated when in-person recreation activities at their facility were restricted, or if their facility offered limited or no activities for people with linguistically and culturally diverse needs (e.g., a resident who primarily spoke Japanese in a facility where programming was offered only in English). Many of these previously unreachable older adults, including those living in residential care facilities, found themselves, with the arrival of virtual programming, able to connect with peers in new and rewarding ways through community organization programming. Organizations lauded this new reach and reported considering keeping such approaches post-pandemic.

Increased ‘Breadth’ of reach
Almost as many organizations discussed how the online transition had enabled them to attract new participation from outside their previous geographic catchment area. This allowed for collaborations with art museums, musical performers, and individual course facilitators from other areas. Virtual activities also drew participants from distant locations; for example, an Indigenous program user took part in online drum circles from their location in Mexico, and program users who had previously participated only in summer activities in B.C. before relocating for the fall and winter were now able to join year-round. Whether or not these members continued their patterns of travel during the pandemic, the ability to take part in programs remotely allowed many program users to participate at new times and in new ways. This new reach was appealing for community groups. Themes are summarized in Fig. .
Through this study we interviewed 26 individuals representing older adult-serving community organizations to understand their experiences of adapting their services during the COVID-19 pandemic. We included organizations from across B.C. serving older adults in rural, remote, and urban communities. Our most notable finding is the influence of resources as facilitators and barriers at both the organizational and individual levels, commonly related to human resources and the capacity to receive and provide IT support. Organizations with the capacity to re-purpose staff (e.g., lifeguards with technical skills) were able to pivot their services to support the technological needs of older adults in their communities, whereas organizations that relied largely on volunteers struggled to adequately support their members. Despite the challenges, organizations lauded the benefits of the increased reach of their membership activities facilitated by greater use of IT. Findings related to the capacity of organizations to meet older adults’ needs have been reported elsewhere. Somerville, Coyle and Mutchler surveyed councils on ageing and senior centers in Massachusetts to understand adaptations made during the pandemic. Most of these organizations reported being very well prepared or somewhat prepared: most were still functional, but programs were limited to essential services, prioritizing nutritional and social needs. Results showed that centers offering more programs were more likely to use both digital and traditional means rather than only one. This study also found challenges with IT, loss of staff, and adjusting to working remotely, which align with the findings of our study. An important finding from this study relates to the value of community organizations working collaboratively to overcome pandemic-related barriers and increase their reach. Community organizations in B.C. found strength in numbers as they sought to address the needs of their combined members during the pandemic. Similarly, a recent study of the National Association for Area Agencies on Aging in the USA showed that the majority of responding centers were serving more clients after the onset of the pandemic and faced increased demand for aging services. This was due to transitions in ways of delivering services and the need to build new partnerships with community organizations such as healthcare entities. These partnerships helped mobilize services for the growing demands before additional funding was available. Along with service delivery, community organizations can play an important role in the distribution and dissemination of information during a crisis like the COVID-19 pandemic because of the established trust with their members. Weinberger-Litman et al. studied a Jewish community that was the first community to be quarantined in the USA. The authors found that participants had greater trust in community organizations (religious institutions) than in governmental and media sources of COVID-19-related information. This demonstrates the importance of close collaboration between health agencies and such communities to facilitate and ensure the relay of reliable information. This is a salient finding as we combat the misinformation that accompanied the pandemic. There were mixed findings about whether gender-specific programs were of more interest to men or women; however, we note that gender differences in supports and coping existed before the pandemic.
One group described challenges engaging men in virtual activities, which is consistent with prior research finding that men were underrepresented in community center use, whereas another organization reported the opposite. Although we did not find a gender analysis regarding transitions to online services published elsewhere, we note that Campos-Castillo found that men are usually more likely than women to search for information but are less likely than women to share information. While our research is not conclusive regarding the gendered implications of the pivot to online supports, it warrants future consideration.

Limitations
This analysis is subject to several limitations. We identified a large number of older adult-focused groups across the province, and of the groups contacted, a relatively low proportion had staff willing to be interviewed. Some declined to participate, citing limited resources and an overwhelming number of research-related requests. Others did not respond at all. As described in the methods section, interviews were not recorded. While not without limitations, this approach has been appraised as a suitable method for experienced qualitative researchers conducting thematic analysis. We found this approach to be effective because participants were able to confirm the accuracy of the notes, which also became a resource for them. In an email communication, one participant noted the value of this systematic overview of their organization’s activities as a reference. We did not collect data on the size of the organizations, but most had few staff or were run entirely by volunteers, who may be older adults themselves. Of note, we contacted the organizations between December 2020 and May 2021, a period of elevated COVID-19 case counts across B.C., before and during the first days of the province’s vaccine rollout. We also recognize that our findings are a description of the experiences of local organizations with unique services and clients; the goal of our study was not generalizability. Further research into how community organizations supported older adults with IT during the COVID-19 pandemic can help inform policy and practice regarding older adults’ care and wellbeing beyond the pandemic.
This study is important because it describes how community organizations leveraged available resources to maintain social interaction for older adults when physical distancing was required during the pandemic. The reliance on IT-based remote methods allowed community organizations to increase the depth of their engagement with older adults, which facilitated engagement with linguistically and culturally diverse groups. Virtual methods also reduced travel burden and facilitated community organizations’ breadth of engagement. Nevertheless, this study documents significant barriers to IT access for older adults and the community-based groups that serve them, which has implications for social interaction beyond the pandemic and for engagement in remote healthcare that continues to rely heavily on IT. Government investment in IT is needed, not only for community organizations but also for older adults who may face sociopolitical barriers. Lack of access to or comfort with IT made a subgroup of older adults vulnerable to isolation during the pandemic, underscoring that IT use among older adults should be a target for policy and social services.
Digital health literacy and sociodemographic factors among students in western Iran: a cross-sectional study

Today, health literacy is recognized as a significant public health issue that plays an essential role in improving health equity. According to the World Health Organization (WHO), health literacy refers to “the personal characteristics and social resources needed by individuals and communities to access, understand, evaluate, and use information and services for health-related decision-making”. With the advancement of technology, the sources for obtaining health-related information have shifted from traditional media (radio, television, magazines, bestselling books, etc.) to digital media. The use of health information technology has given rise to the concept of digital health literacy (DHL). Digital health literacy, also known as electronic health literacy, involves using the internet to access, understand, and evaluate health-related information to address health issues. Digital health literacy is an important and evolving concept that can lead to positive transformations in health outcomes. It encompasses unique skills, including traditional literacy, health literacy, information literacy, scientific literacy, media literacy, and computer literacy, needed to navigate health-related care in the age of technology. Moreover, because of its role in optimizing individual health, digital health literacy is crucial for reducing health inequalities on a larger scale. Individuals who use the internet and have more digital skills may be more motivated to utilize health-related resources and digital health services. According to international studies, inadequate digital health literacy has been shown to reduce the use of healthcare services, lower the ability to make health-related decisions, and increase the likelihood of poorer health outcomes overall. Based on the study by Cheng et al., individuals with higher digital health literacy are more competent in searching for and finding suitable, reliable, health-related information than those with lower digital health literacy. The academic community relies widely on the internet for access to scientific and medical websites as well as national and international databases, making students dependent on internet resources. Students represent a significant portion of the population and are expected to possess a high level of knowledge about health issues. Digital health literacy empowers students to utilize emerging technologies, enhancing the quality of healthcare delivery. With the digitization of medical information, students need the ability to evaluate information and distinguish inaccurate from reliable sources in order to apply it effectively. Research on students’ digital health literacy remains limited. According to O’Doherty et al., students demonstrated high proficiency in searching for information online and engaging in social programs on digital platforms. In the study by Rathnayake et al., nearly half of the students (49.4%) had inadequate digital health literacy skills. Another study reported that medical students’ digital health literacy skills were also poor (53.2%). Without sufficient digital health literacy, accessing a large volume of information can lead to confusion.
The abundance of information generated on the internet, which includes inaccurate health data, can interfere with an individual’s ability to make informed health decisions. The results of other studies point to the existence of a digital divide, indicating that sociodemographic factors can affect individuals’ access to the internet for health-related information searches. According to the Integrative Model of eHealth Use, demographic characteristics (such as education, age, gender, and income) and internet usage features (such as having a personal electronic device) influence digital health literacy. Based on this model, social structural inequalities contribute to healthcare disparities through digital health literacy. Understanding the relationship between digital health literacy and sociodemographic factors may help evaluate and implement digital interventions, ultimately reducing health disparities. The results of studies by Estrela et al., Shi, and Lwin et al. suggest that factors such as income level, education, age, gender, and marital status are associated with digital health literacy. According to the findings of De Santis et al. and Choi et al., younger individuals with higher education levels have better digital health literacy. Conversely, adults with lower education levels may face comprehension barriers when searching for health information. Another study reported that men had lower digital health literacy scores than women; however, studies by Tran et al. and Huang et al. showed that male students had higher digital health literacy scores than female students. Additionally, previous studies have shown that higher digital health literacy scores were associated with greater income levels. In one study, occupation and marital status significantly affected digital health literacy. Given that a large number of internet users are students, and that students are predominantly young, concerns have been raised regarding the physical, mental, and social health of the country’s next generation. Digital health literacy is crucial, serving as a significant step toward empowering individuals, youth, and students to manage their health and make autonomous health-related decisions. Therefore, research on digital health literacy is necessary to understand students’ adaptation to digital technology and their use of digital healthcare resources. Accordingly, the present study aimed to assess the level of digital health literacy and associated sociodemographic factors among university students in Asadabad County, Hamadan, western Iran. The findings from this study will help develop educational strategies and interventions to enhance students’ digital health literacy.
Study design
This research was a descriptive cross-sectional study conducted between May and June 2024 among students of the four universities in Asadabad County [Islamic Azad University (550 students), Payame Noor University (300 students), Asadabad Technical and Vocational College (300 students), and Asadabad School of Medical Sciences (250 students)].

Samples
To determine the sample size, the formula for estimating a mean, $$n=\frac{S^{2}Z^{2}}{d^{2}},$$ was used, where S is the standard deviation of the digital health literacy score (5.02 in the study by Turan et al.), Z² = 3.84 is the squared standard normal value for 95% confidence (Z = 1.96), and d is the margin of error considered for estimating the mean (0.44). After substituting these values into the formula, the sample size for this study was calculated to be 500 participants. Data were collected from 500 students enrolled in associate, bachelor’s, master’s, and higher degree programs in the universities of Asadabad County using stratified random sampling proportional to population size. Each university was treated as a stratum, and students were randomly selected from each stratum in proportion to that university’s enrolment. Of these, 198 students (39.6%) were from Islamic Azad University, 104 students (20.8%) from Payame Noor University Asadabad, 108 students (21.6%) from the Technical and Vocational College, and 90 students (18%) from Asadabad School of Medical Sciences. Students at each university were selected randomly by visiting the education office of each college, obtaining a list of students, and then randomly selecting the required number of individuals. These students were contacted and invited to participate in the study; if a student was unavailable or unwilling to participate, another individual was randomly chosen as a replacement. The inclusion criteria were enrolment in one of the universities or colleges in Asadabad County at the time of the study and consent to participate. The exclusion criterion was an incomplete questionnaire.
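As a quick check, substituting the reported values (S = 5.02, Z² = 3.84, d = 0.44) into the formula above reproduces the stated sample size:

$$n=\frac{S^{2}Z^{2}}{d^{2}}=\frac{(5.02)^{2}\times 3.84}{(0.44)^{2}}\approx 499.8\approx 500$$

The reported stratum counts (198, 104, 108, and 90) are also roughly proportional to enrolment (550, 300, 300, and 250 of 1,400 students), which would give expected quotas of about 196, 107, 107, and 89.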
Measures

Demographic information
The demographic information of the students included university, gender, education level, marital status, nativity status, residence, duration of computer use, and satisfaction with financial status.

Digital health literacy instrument (DHL)
A pre-designed standard questionnaire by Van Der Vaart and Drossaert (2017) was used to assess DHL in this study. This questionnaire is designed to evaluate DHL and has previously been validated in various populations and countries. It comprises 21 questions in seven subscales, each with three items. The subscales are:

Operational skills
Using a computer keyboard or mouse; using buttons, links, and hyperlinks on websites.

Navigation skills
Losing track on a website or the internet; knowing how to return to the previous page; clicking on something and seeing something different from what was expected.

Information search
Choosing from all the information found; using appropriate words or search phrases to find the desired information; finding the precise information needed.

Evaluating reliability
Deciding whether the information is reliable; deciding whether the information is written with commercial interests; checking different websites to see if they provide the same information.

Determining data relevancy
Deciding on the usefulness of the information found; using the information found in daily life; using the information found to make health decisions.

Adding content
Clearly formulating a health-related question or concern; expressing opinions, thoughts, or feelings in writing; writing a message.

Protecting privacy
Judging who private information can be shared with or read by; sharing others’ private information.
Items in the five subscales of operational skills, determining data relevancy, evaluating data reliability, information search, and adding content were scored on a 4-point Likert scale (from “very difficult” = 1 to “very easy” = 4), and items in the two subscales of protecting privacy and navigation skills were scored on a 4-point frequency scale (from “never” = 1 to “always” = 4). Finally, the scores of all subscales and the overall score were rescaled to the range of zero to 100 and analyzed. Skills were rated as very undesirable for an average of less than 20.0% of the total score, undesirable between 21.0% and 40.0%, intermediate between 41.0% and 60.0%, desirable between 61.0% and 80.0%, and very desirable between 81.0% and 100.0%. In the study by Van Der Vaart and Drossaert, the DHL tool showed a Cronbach’s alpha of 0.87, indicating acceptable reliability. Additionally, in the study by Alipour et al. among healthcare workers in teaching hospitals in southeast Iran, the validity and reliability of this questionnaire were established, with a Cronbach’s alpha coefficient of 0.98 for the overall scale. A pilot study involving 25 individuals who met the study’s inclusion criteria further confirmed the clarity and applicability of all questionnaire sections. In addition, internal consistency in the present sample was assessed using Cronbach’s alpha, yielding satisfactory values of 0.91, 0.89, 0.82, 0.85, 0.85, 0.71, and 0.73 for the operational skills, determining data relevancy, evaluating data reliability, information search, adding content, protecting privacy, and navigation skills subscales, respectively; Cronbach’s alpha for the entire questionnaire was 0.92.
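To make the scoring and reliability computations concrete, below is a minimal sketch in Python. The paper states only that scores were transferred to a 0–100 range, so the linear mapping from the 1–4 item scale, the handling of the band boundaries, and all variable and function names are our assumptions, not part of the original instrument.

```python
import numpy as np

def subscale_score_0_100(items: np.ndarray) -> np.ndarray:
    """Mean of a subscale's three items (rows = respondents),
    linearly rescaled from [1, 4] to [0, 100]. The linear mapping
    is an assumption; the paper specifies only the 0-100 range."""
    means = items.mean(axis=1)
    return (means - 1.0) / 3.0 * 100.0

def skill_band(score: float) -> str:
    """Five bands reported in the paper; the exact treatment of the
    gap between 20.0% and 21.0% is our assumption."""
    if score <= 20.0:
        return "very undesirable"
    elif score <= 40.0:
        return "undesirable"
    elif score <= 60.0:
        return "intermediate"
    elif score <= 80.0:
        return "desirable"
    else:
        return "very desirable"

def cronbach_alpha(items: np.ndarray) -> float:
    """Standard Cronbach's alpha: k/(k-1) * (1 - sum of item
    variances / variance of the total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Fabricated responses for two respondents on one subscale:
demo = np.array([[4, 3, 4],
                 [2, 2, 3]])
scores = subscale_score_0_100(demo)      # approx. [88.9, 44.4]
bands = [skill_band(s) for s in scores]  # ['very desirable', 'intermediate']
```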
Data collection
After approval from the ethics committee and permission from the university’s research vice-chancellor, coordination with the selected universities was carried out. The researchers introduced themselves and obtained consent from the research units to participate in the study. The study’s objectives were explained to the participants, who were included in the study if they met all the inclusion criteria and provided written informed consent. According to the introductory explanation of the questionnaire, participation in the study was voluntary, and students could withdraw at any time without completing the questionnaire. Additionally, the researchers explained the anonymity and confidentiality of the questionnaires and asked the research units to answer all questions accurately.

Data analysis
After data collection, SPSS 24 software was used for data analysis. Descriptive statistics, including frequency, standard deviation, mean, and percentage, were used to describe the demographic characteristics of the sample. The normality assumption for all variables was examined using the Kolmogorov-Smirnov test and skewness and kurtosis indices; variables with indices in the range of -1 to 1 were considered normally distributed. Independent t-tests and one-way ANOVA were then used to compare the mean scores of the various dimensions of digital health literacy across levels of the qualitative variables. The impact of the various variables on the digital health literacy score was also assessed using multiple linear regression models. A significance level of less than 0.05 was used for this study.

Ethical consideration
This study was approved by the ethics committee of Asadabad School of Medical Sciences (ethical code IR.ASAUMS.REC.1403.009). Oral and written consent was obtained from participants in accordance with the recommendations approved by the ethics committee. Participants were allowed to withdraw from the study at any time if they wished. Additionally, all participants were involved in the research process, and their information was kept confidential.
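The study’s analyses were run in SPSS 24, as described in the Data analysis subsection above; purely as an illustration of an equivalent pipeline, here is a hedged sketch in Python using scipy and statsmodels. The file name and column names (dhl_total, gender, university, computer_hours) are hypothetical.

```python
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

# One row per student, with the 0-100 DHL total score and
# sociodemographic variables; all names are assumptions.
df = pd.read_csv("dhl_survey.csv")

# Normality screen: Kolmogorov-Smirnov on standardized scores plus
# skewness/kurtosis in [-1, 1], mirroring the checks described above.
z = (df["dhl_total"] - df["dhl_total"].mean()) / df["dhl_total"].std(ddof=1)
ks_stat, ks_p = stats.kstest(z, "norm")
skew, kurt = df["dhl_total"].skew(), df["dhl_total"].kurt()

# Independent t-test comparing mean DHL between two groups (e.g., gender).
male = df.loc[df["gender"] == "male", "dhl_total"]
female = df.loc[df["gender"] == "female", "dhl_total"]
t_stat, t_p = stats.ttest_ind(male, female)

# One-way ANOVA across the four universities.
groups = [g["dhl_total"].values for _, g in df.groupby("university")]
f_stat, f_p = stats.f_oneway(*groups)

# Multiple linear regression of the DHL score on several covariates,
# with effects judged at the 0.05 significance level.
model = smf.ols("dhl_total ~ C(gender) + C(university) + computer_hours",
                data=df).fit()
print(model.summary())
```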
This research was a descriptive-cross-sectional study conducted between May to June 2024 among all students from universities in Asadabad county [comprising Islamic Azad University (550 students), Payame Noor University (300 students), Asadabad Technical and Vocational College (300 students), and Asadabad School of Medical Sciences (250 students)].
To determine the sample size, the formula for estimating the mean ( [12pt]{minimal}
$$\:n={S}^{2}{Z}^{2}/{d}^{2}$$ ) was used, where S is the standard deviation of the digital health literacy score, which was 5.02 in the study by Turan et al. . The square of the 95th percentile of the normal distribution is 3.84, and d represents the margin of error, or the difference considered for estimating the mean, which is 0.44. After substituting these values into the formula, the sample size for this study was calculated to be 500 participants. Data was collected from 500 students enrolled in associate, bachelor’s, master’s, and higher degree programs in the universities of Asadabad county using stratified random sampling proportional to the population size. In this way, each of the universities was considered as a stratum, and then several students (proportionate to the number of students of that university) were randomly selected from each stratum (university). Of these, 198 students (39.6%) were from Islamic Azad University, 104 students (20.8%) from Payame Noor University Asadabad, 108 students (21.6%) from Technical and Vocational College, and 90 students (18%) from Asadabad School of Medical Sciences. Students at each university were selected randomly. This was done by visiting the education office of each college, obtaining a list of students, and then randomly selecting a specific number of individuals. These students were contacted and invited to participate in the study. If a student was unavailable or unwilling to participate, another individual was randomly chosen as a replacement. Inclusion criteria included: being enrolled in one of the universities or colleges in Asadabad county at the time of the study and consent to participate in the study. The exclusion criterion was incomplete questionnaires.
Demographic information The demographic information of the students included university, gender, education level, marital status, nativity status, residence, duration of computer use, and satisfaction with financial status. Digital health literacy instrument (DHL) A pre-designed standard questionnaire by Van Der Vaart and Drossaert (2017) was used to assess DHL for this study . This questionnaire is designed to evaluate DHL and has previously been validated in various populations and countries . The questionnaire comprises 21 questions and seven subscales, each with three items. The subscales are: Operational skills Using a computer keyboard or mouse, using buttons or links and hyperlinks on websites. Navigation skills Losing track on a website or the internet, knowing how to return to the previous page, clicking on something and seeing something different from what was expected, Information search Choosing from all the information found, using appropriate words or search phrases to find the desired information, finding the precise information needed, Evaluating reliability Deciding whether the information is reliable, deciding whether the information is written with commercial interests, checking different websites to see if they provide the same information, Determining data relevancy Deciding on the usefulness of the information found, using the information found in daily life, using the information found to make health decisions, Adding content Clearly formulating a health-related question or concern, expressing opinions, thoughts, or feelings in writing, writing a message, and Protecting privacy Judging who can share private information with reading, sharing others’ private information. This questionnaire has 21 questions and seven domains (each with three questions) including Operational skills, determining data relevancy, evaluating data reliability, Information searching, adding content, protecting privacy and Navigation skills. Items related to the five areas of operational skills, establishing relevance, assessing reliability, searching for information, adding content by a 4-point Likert scale (from “very difficult” = 1 to “very easy” = 4), and items related to two The extent of privacy protection and orientation skills were scored on a 4-point scale (from “never” = 1 to “always” = 4) . Finally, the grades of all areas and the overall grade were transferred to the range of zero to 100 and analyzed. Skills are rated as very undesirable for an average of less than 20.0% of the total score, undesirable between 21.0% and 40.0%, intermediate between 41.0% and 60.0%, desirable between 61.0% and 80.0%, and very desirable between 81.0% and 100.0% . In the study by Van Der Vaart and Drossaert, the DHL tool showed a Cronbach’s alpha of 0.87, indicating acceptable reliability . Additionally, in the study by Alipour et al. among healthcare workers in teaching hospitals in southeast Iran, the validity and reliability of this questionnaire were achieved with a Cronbach’s alpha coefficient of 0.98 for the overall scale . A pilot study involving 25 individuals who met the study’s inclusion criteria further confirmed all questionnaire sections’ clarity and applicability. 
In addition, internal consistency was assessed using Cronbach’s alpha for the operational skills, determining data relevancy, evaluating reliability, information searching, adding content, protecting privacy, and navigation skills subscales of the DHL questionnaire, yielding satisfactory values of 0.91, 0.89, 0.82, 0.85, 0.85, 0.71, and 0.73, respectively. For the entire questionnaire, a Cronbach’s alpha of 0.92 was obtained.
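For readers unfamiliar with the statistic, Cronbach’s alpha compares the variance of individual items with the variance of their sum. A minimal NumPy sketch of the standard formula follows (illustrative only; the study computed its values in SPSS):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Example: 5 respondents answering a 3-item subscale on a 1-4 scale.
rng = np.random.default_rng(0)
demo = rng.integers(1, 5, size=(5, 3)).astype(float)
print(cronbach_alpha(demo))
```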
After approval from the ethics committee and permission from the university’s research vice-chancellor were obtained, coordination with the selected universities was carried out. The researchers introduced themselves and obtained the participants’ consent to take part in the study. The study’s objectives were explained to the participants, who were enrolled if they met all the inclusion criteria and provided written informed consent. As stated in the introductory explanation of the questionnaire, participation in the study was voluntary, and students could withdraw at any time without completing the questionnaire. Additionally, the researchers explained the anonymity and confidentiality of the questionnaires and asked the participants to answer all questions accurately.
After data collection, SPSS 24 software was used for data analysis. Descriptive statistics, including frequency, percentage, mean, and standard deviation, were used to describe the demographic characteristics of the sample. The normality assumption for all variables was examined using the Kolmogorov-Smirnov test together with skewness and kurtosis indices; variables whose indices fell within the range of -1 to 1 were considered normally distributed. Independent t-tests and one-way ANOVA were then used to compare the mean scores of the various dimensions of digital health literacy across levels of the qualitative variables. The effect of multiple variables on the digital health literacy score was also assessed using multiple linear regression models. A significance level of less than 0.05 was used throughout.
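The same analysis pipeline can be outlined in open-source tools. The sketch below uses Python with pandas and SciPy purely for illustration; the file and column names are hypothetical, as the study itself used SPSS:

```python
import pandas as pd
from scipy import stats

df = pd.read_csv("dhl_survey.csv")  # hypothetical file and column names

# Normality screen: skewness/kurtosis within [-1, 1] treated as normal.
skew, kurt = df["dhl_total"].skew(), df["dhl_total"].kurt()

# Two-level comparison (e.g., gender): independent-samples t-test.
male = df.loc[df["gender"] == "male", "dhl_total"]
female = df.loc[df["gender"] == "female", "dhl_total"]
t_stat, p_t = stats.ttest_ind(female, male)

# Multi-level comparison (e.g., university): one-way ANOVA.
groups = [g["dhl_total"].values for _, g in df.groupby("university")]
f_stat, p_f = stats.f_oneway(*groups)

print(f"skew={skew:.2f} kurt={kurt:.2f} t-test p={p_t:.3f} ANOVA p={p_f:.3f}")
```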
This study was approved by the ethics committee of the Asadabad School of Medical Sciences (ethics code IR.ASAUMS.REC.1403.009), and its guidelines were adhered to. Oral and written consent was obtained from participants in accordance with the recommendations approved by the ethics committee. Participants were free to withdraw from the study at any time. Additionally, all participants were engaged in the research process, and their information was kept confidential.
Demographics

In this study, 500 students from four universities in Asadabad county participated. The majority of the students were female (305 students, 61%), and the largest group (245 students, 49.0%) was enrolled in associate’s degree programs. 380 students (76%) were single, and 280 students (56%) were native to Asadabad. The frequency distribution of the sample across variables such as university, gender, level of education, marital status, native status, place of residence, duration of computer use (hours), and satisfaction with financial status is reported in Table . According to the Kolmogorov-Smirnov test and the skewness and kurtosis indices, all variables (except age and the “protecting privacy” dimension) had a normal distribution. The Cronbach’s alpha coefficient was also calculated and reported for the overall score and all dimensions of the digital health literacy questionnaire. Table presents the minimum, maximum, mean, standard deviation, skewness, kurtosis, and Cronbach’s alpha for all the quantitative variables. According to the questionnaire instructions, the digital health literacy scores and their various dimensions were calculated by summing the relevant questions. All scores were then transformed to a 0-100 scale for analysis. Scores of 20 or less were considered “very undesirable”, 21–40 “undesirable”, 41–60 “moderate”, 61–80 “desirable”, and 81–100 “very desirable” digital literacy. In Fig. , the percentage of individuals at each level of digital health literacy (from very undesirable to very desirable) is shown. In Table , the correlation between age and the questionnaire variables (total score and its dimensions) is reported. According to the study results, the correlation between age and navigation skill is positive and significant ( r = 0.118, p < 0.001), indicating that, on average, navigation skill increases with age. However, the correlations between age and the other questionnaire variables (except protecting privacy) are negative and significant ( p < 0.001), suggesting that these variables, on average, decrease as age increases. Given the normality of the digital health literacy variable, independent t-tests and one-way ANOVA were used to compare mean digital health literacy scores across levels of the qualitative variables. The results are reported in Table . According to these tests, the digital health literacy score was significantly associated with university, gender, level of education, native status, place of residence, and satisfaction with financial status. The score was significantly higher in female students than in males ( P = 0.049), and in non-native students than in native students ( P = 0.001). Tukey’s post hoc test was then used for pairwise comparisons of mean digital health literacy scores across qualitative variables with more than two levels. According to this test, the mean digital health literacy score was significantly higher among students of the Technical and Vocational College than among students of the other three universities: Payame Noor ( P = 0.041), the School of Medical Sciences ( P = 0.048), and Azad ( P < 0.001). Scores were also significantly higher at Payame Noor ( P < 0.001) and the School of Medical Sciences ( P < 0.001) than at Azad University. The mean digital health literacy score was significantly higher for associate students than for students in master’s and higher degree programs ( P = 0.023).
The mean digital health literacy score among students living in dormitories was significantly higher than that of students living in personal houses ( P < 0.001) or rented houses ( P < 0.001), and the mean for students living in personal houses was in turn significantly higher than that for students living in rented houses ( P < 0.001). The mean digital health literacy score also increased with satisfaction with financial status: it was significantly higher among students reporting completely sufficient finances than among those reporting less than sufficient ( P = 0.028) or insufficient ( P < 0.001) finances; significantly higher among students reporting sufficient finances than among those reporting less than sufficient ( P = 0.027) or insufficient ( P < 0.001) finances; and significantly higher among students reporting less than sufficient finances than among those reporting insufficient finances ( P < 0.001) (Table ). Finally, multiple linear regression was used to examine the simultaneous effect of the various variables on the overall digital health literacy score. All variables were initially entered into the model, and backward selection was then used to remove variables that were not statistically significant. The final model showed that age and satisfaction with financial status were significantly associated with digital health literacy scores. For every one-year increase in age, the digital health literacy score decreased by 0.54 units ( P < 0.001). Additionally, as satisfaction with financial status declined (from completely sufficient to sufficient, less than sufficient, and insufficient), the digital health literacy score decreased by an average of 9.12 units per level ( P < 0.001) (Table ).
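Backward selection of this kind is easy to express programmatically. The sketch below, using Python’s statsmodels, is our own illustration of the procedure with hypothetical data file and column names (the study ran the model in SPSS):

```python
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("dhl_survey.csv")  # hypothetical file and column names
predictors = ["age", "financial_satisfaction", "gender", "education_level"]
X = pd.get_dummies(df[predictors], drop_first=True).astype(float)

# Backward selection: refit, then drop the weakest predictor until all p < 0.05.
while X.shape[1] > 0:
    model = sm.OLS(df["dhl_total"], sm.add_constant(X)).fit()
    pvals = model.pvalues.drop("const")
    if pvals.max() < 0.05:
        break                           # all remaining predictors significant
    X = X.drop(columns=pvals.idxmax())  # remove the least significant one

print(model.summary())
```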
Discussion

This study aimed to assess digital health literacy and its associated factors among students. Given the spread of false information and news that negatively affects disease prevention, understanding students’ DHL levels and related factors is essential for health policymakers and decision-makers, as well as for public health interventions. The results of this study indicate that students’ digital health literacy is at a moderate level, which is consistent with the results of the studies by Tubaishat et al., Tsukahara et al., and Tanaka et al. It is also in line with previous studies conducted in Pakistan, France, and other countries, where students’ health literacy levels were relatively low to moderate. In the study by Rivadeneira et al., more than half of students had sufficient health literacy; in Germany the figure was 49.9%, in Pakistan 54.3%, and in the United States only 49% of students had sufficient digital health literacy. In a similar study in Iran, the digital health literacy score among health workers was higher than in the present study. The status of digital health literacy depends on socioeconomic factors (e.g., culture, environmental factors, income, etc.). There are no websites like MedlinePlus (of the American National Library of Medicine) in Iran; moreover, the lack of trust of Iranian internet users and the dependence of Iranian students on unreliable sources have also lowered digital health literacy. Therefore, measures by health authorities and policymakers to promote the use of online health information sources and to provide health-related information on social media, tailored to students’ needs and the usefulness of current information, are essential for raising students’ digital health literacy. Based on the results of this study, the level of digital health literacy among students was desirable in terms of “operational skills”. Considering that the first step in accessing health information is using computers and internet browsers, operational skills play an important role in enhancing individuals’ digital health literacy. In the study by Shudayfat and colleagues in Jordan and the study by Alipour and colleagues in Zahedan, respondents reported very desirable operational skills. In the study by Farooq and colleagues, 83% of students received a high score in the operational skills dimension of digital health literacy. Other studies have shown that university students do not have the necessary skills to search for health information on the internet, which highlights the importance of equipping students with the skills necessary to evaluate health information. People who can use computers and the internet are better able to search for, identify, and correctly use the right resources, which has a positive impact on students’ digital health literacy and health decisions. However, further studies are needed to obtain more accurate results. Based on our results, the privacy protection category was the most challenging: it had the lowest score among the subscales, and students’ ability to maintain privacy when sharing health information was unfavourable. In the study by Aydınlar et al.,
students reported feeling helpless in the face of the rules organizations adopt to protect personal data, and noted that because students spend more time on the internet and use information technologies more often, they may be more vulnerable to cyber threats than other people. Additionally, a study on web-based data protection in the German population showed that 72% of respondents had doubts about the security of their data shared online and felt they lacked control over what happens to their web-based data. Feeling secure in the digital world, especially when searching for health-related information, is a vital issue. It has therefore been suggested that, in education, young people, especially female students, should be involved in security and privacy awareness programs and taught to use effective passwords to protect their online accounts. The results of the study showed that participants’ ability in the “navigation skills” category, that is, orienting correctly on websites to find suitable information, was at an unfavourable level. The ability to navigate properly is a necessary skill and is influenced both by individuals’ skills and by the complexity of health information systems. This result is consistent with Zhao et al.‘s study, where respondents reported the lowest scores in the area of information-seeking skills. In Farooq et al.‘s study, students’ navigation skills were at a desirable level, which did not align with our results. Because the students in this study had poor health information navigation skills, their potential to improve self-care is limited. It is therefore recommended that curriculum planners take students’ navigation skills into account and design programs according to their needs. It is also necessary to integrate digital health topics comprehensively into their training so that they can engage more effectively with digital health tools. Students participating in this study were less likely to achieve a sufficient level of digital health literacy in the “determining data relevancy” dimension. Determining data relevancy refers to the utility of data in clinical settings. We believe this is an interesting finding, suggesting that as the amount of information increases, individuals may face challenges in finding and applying appropriate information. Students in the study by Rosário et al. also had moderate levels of digital health literacy in the data relevancy dimension. Desirable results on this scale were reported in the studies by Shudayfat et al. and Zakar et al., which is not consistent with our results. Irrelevant health information can be costly and may waste people’s time, leading to errors in health planning. It is therefore proposed that more attention be paid to training students in digital skills for obtaining the necessary health information. Proper searching, as one of the dimensions of digital health literacy, plays an important role in obtaining accurate information. In this study, students were at a moderate level in the “information searching” dimension. In Nguyen et al.‘s study, information searching was associated with a moderate level of digital health literacy, which is consistent with the results of the present study.
Other studies have shown that university students lack the necessary skills to search for health information on the internet, highlighting the need for better education in internet searching and health information retrieval. Governments can also use popular social media (Telegram, YouTube, etc.) to integrate official health messaging. According to our results, students scored moderately in the “adding content” category. This category examines the ability to formulate health-related questions, express opinions, thoughts, and feelings in writing, and write messages in a way that is understandable to the recipient. Shaabani et al. rated respondents’ attitudes toward sharing digital technology information with their audience as moderate, which is consistent with our results. Since young people can be confused when exposed to diverse media content, it is necessary to improve competencies such as skills, knowledge, and attitudes towards media technologies. Another topic examined in this study was the “evaluating data reliability” category for websites. In the present study, students had a moderate ability to identify which websites provide reliable information on health topics. This may be because in Iran it is not easy for audiences to access health information online, whereas in other countries this aspect of health information is emphasized, and some associations and health organizations, such as the Medical Library Association and the Medical Association of the United Kingdom, have introduced reputable health websites. The information on such medical and health-related websites is regularly evaluated by the Ministry of Health. In Iran, by contrast, the evaluation of health information on websites is not yet officially addressed, and there are doubts about the accuracy of the information these websites provide. According to the results of the study by Bak et al., nearly one in three students finds it difficult to decide whether information is reliable, verified, and from official sources. A Slovenian study also showed that one out of every two students (49.3%) has problems judging the reliability of digital information. Attention should therefore be paid to students’ ability to select reliable sources of information and to use the information obtained correctly when making health-related decisions. This underlines the need for educational interventions teaching students how to validate health information available on the internet and other digital technologies in order to improve health literacy. Contrary to the results of some other studies, there was a significant difference between women and men in digital health literacy, with female students scoring higher than male students. In the studies of Park et al. and Salehi et al., a significant difference was likewise found between the two genders regarding e-health literacy. In a country like Iran, women, for cultural reasons, visit health centres and health professionals more often than men and ask more questions about health issues, whereas men rarely visit doctors and prefer to seek other solutions. Moreover, for cultural reasons, Iranian women are more likely to seek health information for both their children and their partners, given their role in the family. Further studies are needed to examine the impact of gender on students’ digital health literacy.
We also found that non-native students and those living in dormitories had higher digital health literacy scores than native students. Owing to the scarcity of comparable studies in the sources reviewed, it was not feasible to compare this result with previous findings. One possible explanation is that students from different cultures have different attitudes towards digital health literacy. Dormitory life and being away from family require students to adapt to different cultures and living conditions, so they may use the internet more to cope with these conditions. However, our results should be interpreted with care. Based on the results of this study, the digital health literacy scores of students at the Technical and Vocational College were significantly higher than those at the three universities of Payame Noor, Medical Sciences, and Azad. In the study by Dastani et al., no difference was observed in the level of electronic health literacy among students of different schools. The inconsistency may reflect cultural differences between the studies and differences between the participants in terms of age and level of education. Regarding the relationship between education level and digital health literacy, students with a higher education level had lower digital health literacy. In Iran, the study by Dastani et al. showed moderate e-health literacy in master’s and doctoral students, which is consistent with the present study. We argue that the associate degree students in this study, being a younger age group than students in other degree programs, are substantially more exposed to web-based health information, and that digital health literacy decreases with age. This pattern may also reflect their exposure to electronic resources as well as the educational courses and information units included in their curriculum. Further research is needed to understand the relationship between educational level and digital health literacy. The association between digital health literacy and living in a dormitory was also significant. These results require careful consideration of the cultural context. Students living in dormitories may spend more time on the internet because they are away from their families. Whether participants are native or non-native must therefore be taken into account when assessing digital health literacy in multicultural environments. Additionally, based on the multiple linear regression analysis, the associations of age and of satisfaction with financial status with digital health literacy were significant. Students’ digital health literacy decreased as they got older. Young people are among the primary consumers of digital information and are also at the forefront of using social media to disseminate information, which affects their health-related behaviours. According to the study by Dolu et al. in Turkey, age was not a predictor of the level of e-health literacy. In the study by Dadaczynski et al., younger age groups were more influenced by web-based health information. Previous studies by Zhao et al., Cheng et al., and García-García et al. also found that age was negatively associated with digital health literacy scores. Studies by van Deursen et al. showed that older adults often have lower operational and navigational skills than younger individuals.
With increasing age, individuals face more cognitive, sensory, and motor barriers and challenges in using technology for health information than younger individuals do. As noted, the multiple linear regression analysis also showed a significant association between satisfaction with financial status and digital health literacy. It appears that as students’ economic situation improved, so did their access to the internet and the quality of that access, which may support better searching and greater knowledge related to digital health literacy. The study by Rivadeneira in Spain and the study by Svendsen et al. showed that unfavourable economic and social status was associated with lower digital health literacy. Policymakers in universities and government should focus on reducing socioeconomic inequalities and identifying the role of cultural factors in digital health literacy. One of the strengths of this study, in addition to identifying the level of digital health literacy and the factors influencing it, was the use of a valid digital health literacy questionnaire, which made it possible to compare the digital health literacy of Iranian university students with that of students in other countries. In the present study, the instrument showed a Cronbach’s alpha above 0.7 for the total scale and all subscales, which is similar to the validation results for the original scale by Drossaert and van der Vaart, and the reliability of the questionnaire is consistent with the similar Iranian study by Alipour.
Limitations

This study has limitations. Owing to its cross-sectional design, causality between the variables could not be examined, and the results cannot be generalized to the entire student population of Iranian universities. The digital health literacy questionnaire relied on self-reporting, which is a further limitation, and digital health literacy was not measured with an instrument that tests functional health literacy. The low participation of some universities in responding to the questionnaire, despite continuous follow-up, was another limitation. Similar studies are therefore recommended to address these limitations, and policymakers should implement effective interventions to raise digital health literacy at the community level.
Future recommendations

The results of this study highlight the strengths and weaknesses of students’ digital health literacy and indicate that the provision of correct information by the Ministry of Health can improve digital health literacy in different parts of society. It is also suggested that health literacy and digital health literacy be included in university curricula as part of a health communication strategy. To improve lifestyles and instil healthy habits among citizens, especially students and young people, policymakers need to define a digital health literacy roadmap that addresses the existing deficiencies in this area. Further studies are needed that incorporate other factors that may affect digital health literacy, such as internet access, personal and internet skills, social influence, access to facilities, and cost concerns.
Conclusion

The level of digital health literacy among these Iranian university students was moderate, and female students had higher levels of digital health literacy than their male counterparts. The association between sociodemographic status and digital health literacy was also significant. The findings of this study can be used by health policymakers to develop digital health infrastructure. It is also suggested that higher education programs be designed to better prepare students for the era of technological change by creating more space for digital health literacy, particularly among healthcare students. Understanding which factors influence the digital health literacy of young people remains an important question for health decision-makers.
|
Neuropathological Features of Gaucher Disease and Gaucher Disease with Parkinsonism | 287ea7d8-20de-4f37-ae6c-73690b66f418 | 9147326 | Pathology[mh] | Gaucher disease (GD) is a lysosomal storage disorder resulting from mutations in the GBA1 gene that lead to decreased activity of acid β-glucocerebrosidase (GCase, E.C. 3.2.1.45). This enzyme cleaves the lipid glucocerebroside (GlcCer) into glucose and ceramide, and glucosylsphingosine (GlcSph) into glucose and sphingosine. Failure of this enzyme to clear these substrates from lysosomes causes macrophages to become engorged with lipid, giving rise to what are known as “Gaucher cells”. Typically, GD has been subdivided into three types based on the presence and rate of progression of neurological involvement. However, GD can also be seen as a phenotypic spectrum, given the diversity of associated clinical manifestations, with the primary distinction being the degree of central nervous system (CNS) involvement. Type 1, or non-neuronopathic GD (GD1), has presentations ranging from asymptomatic adults to young patients with significant visceral or skeletal disease. The most severe type, acute neuronopathic or type 2 (GD2), is associated with progressive neurodegeneration and early lethality. The disease manifests before 6 months of age, and some cases may present perinatally with congenital ichthyosis or hydrops fetalis. Type 3 (GD3), or chronic neuronopathic GD, has neurologic involvement, particularly oculomotor involvement, that typically presents in early childhood with slower progression. Even within GD3 there are multiple phenotypes. Some patients have remarkable visceral and skeletal involvement with few neurological manifestations, while others may have learning disabilities, autism, generalized seizures, or progressive myoclonic epilepsy (PME). In addition, slowing of horizontal saccadic eye movements, discrepant verbal and performance IQ scores, and background slowing on EEG are frequently observed in GD3. Over two decades ago it was appreciated that a small subset of adult patients with GD also develop parkinsonian features. Greater awareness of patients sharing these disorders led both to the identification of further patients in GD clinics around the world and to the observation that parkinsonism was also more frequent among relatives of GD probands. Subsequently, patients diagnosed with sporadic Parkinson disease (sPD) were also found to carry pathologic heterozygous variants in GBA1. Ultimately, large multicenter studies confirmed that heterozygous GBA1 mutations are a genetic risk factor for both Parkinson disease (PD) and dementia with Lewy bodies (DLB), increasing the disease risk 5–10-fold, depending on the specific mutation. The risk of developing parkinsonism for patients with GD is not well established and varies between studies, from 9–12% at the age of 80 to a 20-fold increased lifetime risk. Importantly, a majority of GD1 patients never exhibit parkinsonian features, indicating that a more complex interplay underlies the neurodegeneration. In cell and animal models of GD, GCase deficiency is accompanied by neuroinflammation, evidenced by glial activation, as well as α-synuclein (α-syn) accumulation. However, the exact mechanism underlying GBA1-associated PD remains unknown.
Hypotheses include both gain-of-function, through promotion of α-syn aggregation, and loss-of-function, leading to neurotoxic lipid accumulation, as well as a bidirectional feedback loop between GCase activity and α-syn aggregation, although no theory has been fully validated. To better understand the disease pathogenesis, we reviewed the neuropathological features associated with glucocerebrosidase deficiency, examining autopsy studies of rare patients with GD. The limited number of cases, especially in subjects with non-neuronopathic GD, highlights the need for standardization of examinations. In addition, we examined reports of neuropathologic studies conducted on patients with GD who developed parkinsonism and compared the findings to those in heterozygous GBA1-mutation carriers with parkinsonism, who are more frequently examined. As uncertainty persists regarding the mechanism underlying GBA1-associated synucleinopathy, an evaluation of the neuropathological features associated with GCase deficiency could provide clues to pathways contributing to the clinical features observed. Published reports of neuropathological evaluations of patients with GD are few, and most have been sporadic case studies including autopsy findings. The first larger and more comprehensive evaluation of the neuropathology of GD was published in 2004, examining autopsies of 12 patients spanning all three types of GD. Certain neuropathological features of GD were shared among individuals with each of the three types. The cell types most often affected were neurons and astrocytes, and distinct regional specificity was noted. The cerebral cortical layers 3 and 5, layer 4b of the calcarine cortex, and hippocampal areas CA2–4 were selectively involved in all forms of GD, although the extent of the abnormalities appeared to depend on the severity of the disease. Regions adjacent to the specific areas involved, including the hippocampal CA1 region and calcarine laminae 4a and 4c, were spared, emphasizing the specificity of neural involvement. The authors also demonstrated that in wildtype brain the pyramidal neurons in CA2–4 and cortical layer 5 showed intense anti-GCase immunoreactivity, suggesting that these regions might be especially vulnerable to diminished GCase levels. Generally, a low level of background gliosis was observed, associated with the vasculature and most apparent in the brainstem and striatum. Perivascular clusters of Gaucher cells were identified in all cases, with a generally higher disease burden in GD2 and GD3 compared to GD1. GD1 has traditionally been defined as non-neuronopathic, and hence no neurological symptoms or signs are evident. In 1980, Soffer et al. described autopsy findings of widespread perivascular clusters of Gaucher cells in cortical and subcortical regions in a 51-year-old man with GD1. While the affected blood vessels were surrounded by an intense fibrillary reaction, there was no neuronal loss and no accumulation of GlcCer in the brain. Importantly, despite the autopsy findings, the patient did not show any neurological symptoms. In the case series by Wong et al., the primary neuropathological features described in the seven cases with GD1 were astrogliosis and perivascular Gaucher cells. Affected brain regions in GD1 were described as gliotic, with perivascular and fibrillary astrogliosis evidenced by GFAP staining, but again without prominent neuronal loss.
Hippocampal involvement was most prominent in the CA2 region and modest in CA3–4, while CA1 was typically spared. Hulková et al. examined the frontal cortex and cerebellum of a 59-year-old woman with GD1 and reported occasional perivascular Gaucher cell clusters in white and grey cerebral matter and in the leptomeninges. Astrocytosis was noted in the white matter and subpial regions, with mild gliosis in the dentate gyrus. In addition, lipofuscin particles were noted in Purkinje cells, Bergmann astroglia, and cortical neurons, further documenting mild neuropathological involvement in GD1 without clinical neurological symptoms. Neuronopathic GD (nGD) encompasses GD2 and GD3, both of which affect the central nervous system in several ways. Elevated levels of brain GlcCer and GlcSph occur in both types, although levels tend to be higher in GD2. Gaucher cells are also found in the brains of patients with both types, but there is some indication that their localization differs. In GD2, there can be free Gaucher cells in the cerebral cortex, with or without additional perivascular Gaucher cells. In GD3, however, perivascular Gaucher cells were predominant. There is, nonetheless, at least one case report of a patient with GD3 who was also found to have free parenchymal Gaucher cells. Four neuronal alteration patterns have been suggested in patients with nGD: (1) mild and nonselective, (2) cerebellodentate, (3) bulbar, and (4) thalamocortical. Patterns (3) and (4) are common in patients with GD2, while pattern (1) is more characteristic of patients with the Norbottnian subtype of GD3 and pattern (2) is observed in other GD3 cases. As described above, pyramidal neuronal loss in hippocampal layers CA2–4 is observed in nGD, with CA2 the most severely affected region and CA1 largely spared. Additionally, cortical laminar necrosis of the third and fifth cortical layers occurs in conjunction with astrogliotic neuronal loss in the fourth layer, though fourth-layer abnormalities are largely localized to the occipital lobe. Clinically, the initial distinction between GD2 and GD3 is the age at symptom onset. GD2 is diagnosed perinatally or in infancy, while GD3 can present at any age but is often diagnosed later. GD2 has some unique presentations, including hydrops fetalis, congenital ichthyosis, severe stridor, and failure to achieve an independent gait. In the case series by Wong et al., hippocampal involvement in GD2 was particularly severe, with significant neuronal loss. The few remaining hippocampal CA2 neurons observed were described as basophilic and shrunken. The finding of particularly severe gliosis in CA2 in a GD2 case was also reported by Kaga et al. in an infant who died at six months. In this child, Gaucher cells were found both in the perivascular regions of the cerebrum and in the brainstem. Neuronal loss was observed in the brainstem, especially in the nuclei of cranial nerves III, V, and VII and the superior olivary complex. The dentate nucleus, as well as the granular layer of the cerebellum, was lost. An early study by Kaye et al. compared the neuropathology of GD2 and GD3. In the two cases of GD2 studied, GlcCer accumulation, Gaucher cells, gliosis, and microglial nodules were observed, and the level of GlcCer accumulation correlated with the degree of neurodegeneration. The one reported case of GD3 displayed a similar pattern of GlcCer accumulation but surprisingly lacked the other neuropathological findings.
Other studies have, however, found neuropathological changes in GD3, possibly reflecting the clinically diverse phenotypes collectively associated with GD3. In another case, a 10-month-old girl clinically diagnosed with GD3, with progressive stimulus-sensitive myoclonus as well as bulbar signs, was studied. The patient showed widespread focal intraparenchymal Gaucher cells in the cerebral cortex, mostly evident in lamina 4, as well as in the granular cell layer of the cerebellum. GFAP immunoreactivity indicating astrogliosis was increased in lamina 4 and, to a lesser extent, in lamina 2 of cortical samples, where mild to moderate neuronal loss was also evident. The pons, medulla oblongata, and substantia nigra (SN) all showed glial scars. In addition, severe loss of neurons and astrogliosis in the dentate nucleus and some loss of Purkinje cells were observed. Brain GlcCer levels were elevated both in the frontal cortex and in the cerebellum. While the clinical diagnosis was reported as GD3, the authors concluded that the neuropathological findings combined the patterns expected in GD2 and GD3, highlighting the phenotypic spectrum in nGD. Several studies of patients with GD3 have suggested that the dentate nucleus is the region most severely affected. An autopsy report of a child with severe GD3, progressive generalized stimulus-sensitive and action myoclonus, and cerebellar ataxia showed selective neurodegeneration of the cerebellar dentate nucleus and dentatorubrothalamic pathway. The remaining neurons of the dentate nucleus showed signs of pyknosis and nuclear condensation. Loss of myelin and axonal profiles was also present in this neuronal population, along with a reduced number of fibers extending from the dentate nucleus. The fiber loss was selective to the superior cerebellar peduncle, which includes the dentatorubrothalamic pathway. Interestingly, neuronal populations in other brain regions, including the cerebral and cerebellar cortices, thalamus, basal ganglia, and inferior olivary nucleus, did not show evidence of decline or damage. Only one focal ependymal lesion with infiltration of Gaucher cells was observed, and specifically no loss of Purkinje cells. The authors concluded that the restricted dentate damage supports a central role of this nucleus in myoclonus. Furthermore, Alzheimer type 2 astrocytes were located in the basal ganglia, substantia nigra, and inferior olivary nucleus, implicating a systemic metabolic disorder. In another early case report, of a patient with stimulus-sensitive myoclonus, cerebellar ataxia, and generalized seizures, Winkelman and colleagues reported somewhat similar neuropathological findings, with the deep nuclei of the cerebellum most severely affected. While there was no loss of Purkinje cells, mild astrogliosis was observed in the molecular layer of the cerebellar cortex. In this case, however, signs of neuronophagia in the brainstem and multiple perivascular aggregates of Gaucher cells in the subcortical white matter were evident. Burrow et al. performed a thorough neuropathological evaluation of a twelve-year-old child with GD3 who had been treated with enzyme replacement therapy for 11 years. Clinically, the child developed a cerebellar tremor, myoclonus, progressive ataxia, and generalized tonic–clonic seizures. At autopsy, isolated and nodular clusters of CD68-positive macrophages were seen throughout the cerebrum, often compressing arterioles.
These perivascular clusters were also found in the basal ganglia, brainstem, hippocampus, cerebellum, and thalamus. Again, neuronal loss was prominent in the cerebellar dentate nucleus, and a marked loss of Purkinje cells was noted. Diffuse astrogliosis was observed, often surrounding engorged macrophages. Phosphorylated tau was identified in neuronal somata and processes in the hippocampus, basal ganglia, and cerebellum. In addition, rare cells in the cortex and hippocampus showed enhanced α-syn immunoreactivity. Thus, despite the patient’s young age, there were markers suggestive of a potential neurodegenerative disorder. GD3 includes the “Norbottnian” subtype, named for a geographic isolate in northern Sweden where it was first described. This subtype, which is generally associated with the GBA1 genotype L444P/L444P, is characterized by infantile or juvenile onset, with slow progression of CNS involvement. Conradi et al. conducted a morphological and biochemical analysis of five GD3 brains from this cohort, demonstrating Gaucher cells in each case. In two cases, loss of neurons and myelin near the Gaucher cells was observed. Varying degrees of neuronal loss, satellitosis (clustering of glia around neurons), and neuronophagia were noted in all five patients. Light microscopy demonstrated lipofuscin with simple and complex lipids, but not glycolipids. Inclusion bodies were seen in both cerebral and cerebellar neurons, the dentate nucleus, and the pons. GlcCer accumulation was present in these cases, although the levels varied; they tended to be higher in patients who had undergone splenectomy and were influenced by the extent of the generalized lipid storage process in individual patients. Higher levels of GlcSph were noted in cases with more advanced nerve cell loss. This led to the suggestion that the accumulation of lipid substrates may act to prime a neurodegenerative process, which is one hypothesis proposed regarding why some patients with GD develop LBD. The unanticipated link between the monogenic disorder GD and the multifactorial neurodegenerative disorder PD has blurred the boundaries between genetic and sporadic Lewy body disorders (LBDs). As in sporadic LBD (sLBD), patients with GD who develop parkinsonism (GD-LBD) show a wide spectrum of phenotypes, ranging from slowly progressing L-DOPA-responsive PD to rapidly progressive dementia with Lewy bodies (DLB) presentations. While, on an individual basis, patients with LBD carrying GBA1 mutations are clinically indistinguishable from those with sporadic disease, as a group, patients with parkinsonism who are either homozygous or heterozygous for GBA1 mutations have an earlier age of onset, faster progression, and more pronounced cognitive decline than those without mutations. While the literature describing the neuropathology of heterozygous GBA1 carriers is expanding, there are only a few published neuropathological evaluations describing findings in homozygous patients with both disorders (summarized in ). Unlike some of the other familial PD-related genes, patients with GBA1-LBD regularly exhibit Lewy body (LB) pathology at autopsy, mirroring a core neuropathological feature of PD and DLB. LBs are neuronal perikaryal deposits mainly composed of misfolded α-syn. In addition, more than 80 different proteins, membranes, lipids, and distorted organelles have been identified in these aggregates.
There are two subtypes of LBs, the classical brainstem type and the cortical type, each with a different localization as well as a different microstructure, which affects the likelihood of their identification during the neuropathological examination. Since GBA1-LBD shares essential histopathological features of LBD, the histopathological signature of GD-LBD could potentially provide insights into pathophysiology relevant to a larger group of affected patients. In the early 2000s, Tayebi et al. published a case series describing patients with GD-LBD, suggesting that GCase deficiency may make patients more vulnerable to parkinsonism. Brief neuropathological descriptions were included for four of the cases. Each exhibited a loss of dopaminergic neurons in the substantia nigra pars compacta (SN), the pathological hallmark of PD, as well as LB pathology, although the distribution of LBs varied among the patients. Specifically noted by the authors were brainstem-type LBs in hippocampal regions CA2–4, sparing CA1. As noted above, these regions are specifically affected in GD. One year later, Wong et al. published additional neuropathological descriptions of the same patients. Each of the GD-LBD cases exhibited astrogliosis in hippocampal areas CA2–4, calcarine cortex layer 4b, and cerebral cortex layer 5, as reported in GD1 cases without parkinsonism. The SN showed neuronal loss, brainstem-type LBs, and gliosis. Two of the cases also had brainstem-type LB pathology in hippocampal pyramidal neurons, and in a third, brainstem-like LBs were limited to the SN. The fourth case had both brainstem and widespread cortical LBs, consistent with diffuse LBD. The included cases had different GBA1 genotypes, indicating that no specific mutation predisposes patients to LBD. Of the two homozygous N370S patients, only one had hippocampal LB involvement. Even though neuropathology is the gold standard for the diagnosis of neurodegenerative disorders, there is, to date, no neuropathological criterion separating DLB from PD with dementia (PDD). DLB cases tend to have a larger LB burden, especially in the temporal lobe and CA2 region of the hippocampus, as well as more pronounced Alzheimer’s disease-related pathologies compared to PDD. The degree of Lewy pathology in the hippocampal CA2 region has also been linked to cholinergic depletion and the development of dementia in PD. It is tempting to correlate the involvement of Lewy pathology in hippocampal regions in GD-LBD with the more pronounced cognitive decline seen in GBA1-PD patients, but since hippocampal involvement is also observed in LBD without GBA1 mutations, more cases need to be examined to determine whether the spread of LB pathology differs from sporadic cases. Another source of neuropathological studies of GD-LBD now results from the inclusion of cases of GD in large autopsy series performed on subjects with PD. After genetic screening of a pathology cohort of more than 1200 patients with neurodegenerative disease, Blauwendraat et al. identified one case who was homozygous for GBA1 N370S as well as heterozygous for LRRK2 G2019S. The patient presented clinically with PD and showed neuronal loss in the SN but exhibited no tau or LB pathology. Neuropathological reports of LRRK2 G2019S carriers have yielded heterogeneous results with regard to LB pathology, possibly explaining the lack of LB pathology in this case. Furthermore, Adler et al.
examined 12 GBA1 carriers with PD, seeking to establish the neuropathological differences between GBA1-PD and sPD. This series included one case with genotype N370S/N370S, and hence GD-PD, but no individual information regarding neuropathological findings or comorbidities in this subject was reported. It is still unclear whether there are histopathological differences between GD-LBD and sLBD. Several studies suggest a more widespread cortical LB burden in GBA1-PD, although this remains under debate. While, as mentioned, there was one report of increased α-syn in both the hippocampus and cortex in a 12-year-old patient with GD3, no α-syn pathology was detected in five infants with GD2 who had widespread Gaucher cells in the CNS. This indicates that α-syn can accumulate early but suggests that GD disease burden does not directly correlate with α-syn pathology. Since incidental LB pathology is found in approximately 10% of healthy people above 60 years of age, α-syn accumulation in single cases in which no clinical features of parkinsonism were detected should be interpreted with caution. On a molecular level, in a small cohort, Goker-Alpan et al. showed that GCase was present in LBs specifically in GBA1-LBD. In patients with GD-LBD, over 80% of LBs stained positive for GCase, versus 33–90% in heterozygous carriers and <10% in sPD. This suggests a role for mutant GCase in LB formation in GBA1-related disease but, as mentioned above, a large number of proteins have been found in LBs, many without a documented role in the pathophysiology. Furthermore, the variability in heterozygotes implicates additional individual stressors and/or protective factors. Unlike in nGD, in the few cases of GD-LBD investigated, as well as in heterozygous GBA1 carriers, no significant GlcCer or GlcSph accumulation was observed in the CNS. Slightly increased GlcCer has been reported in the SN in sPD, although the significance of this finding is unclear. The rare synucleinopathy multiple system atrophy (MSA) is marked by α-syn inclusions in oligodendrocytes as opposed to neurons. Whether there is a link between GBA1 and the development of MSA is still not settled. Interestingly, one case of autopsy-verified MSA was incidentally found to be homozygous for N370S but had never been diagnosed with GD during life. Examination showed atrophy with neuronal loss and gliosis of the basal ganglia and cerebellum, indicating a mix between the major subtypes of MSA, MSA-P (parkinsonian) and MSA-C (cerebellar). The patient had glial cytoplasmic inclusions containing α-syn, indicative of MSA, and rare neurofibrillary tangles, as seen in Alzheimer’s disease, but no LB pathology was evident. The appreciation of the link between GBA1 and the LBDs has stimulated an upsurge in GBA1-related research activity. However, the field is hampered by the dearth of well-documented autopsy studies, and the full spectrum of neuropathological findings associated with GD has yet to be established. Many of the reported cases exhibit astrogliosis in cortical layers 3 and 5 and in hippocampal regions CA2–4. Selective neuronal loss is described in neuronopathic GD. Rare autopsy studies of GD3 show depletion of neurons largely limited to the dentate nucleus. Findings regarding the loss of Purkinje cells are conflicting, as is the degree of Gaucher cell infiltration. The variability in pathological findings likely reflects the recognized clinical heterogeneity in GD.
However, it should be noted that these pathological patterns are based on very few cases and certainly do not cover the entire phenotypic spectrum of GD. Furthermore, the specific brain regions examined and the staining techniques used vary among publications, limiting direct comparisons. Brain accumulation of GlcCer and GlcSph has been observed in nGD. While these lipids are generally seen as toxic and causative of neurodegeneration, this does not explain the selective neurodegeneration that occurs in response to the systemic increase. In control brains, Wong et al. demonstrated increased levels of GCase localized to the brain regions specifically affected in nGD. Studies investigating the causes of the cell-type-specific vulnerability could increase our understanding of the role of the implicated lipids in LBD and other neurodegenerative diseases, including Alzheimer's disease. Several of the original neuropathological studies in GD patients were published before the link to LBD was known, and therefore, many of the cases were not specifically examined for α-syn pathology. Interestingly, the GD-affected hippocampal areas CA2–4 are also specifically involved in GD-LBD. Hippocampal involvement is common in sPD according to Braak staging. Recently, the question of whether Braak staging is applicable to all subtypes of PD has been raised, and hence, more studies of various well-defined PD cohorts are needed. While neuropathological assessments of GD-LBD are quite limited, the similar LB pathology reported suggests that GD-LBD might provide a relevant model to understand cellular pathways generally relevant to LBD. Careful GBA1 genotyping of brain bank PD series could potentially lead to the identification of further cases. However, without a better understanding of GD-related pathology, potentially important subtle differences between GD-LBD and sPD that could provide critical mechanistic clues might be overlooked.
One hundred years of phase polymorphism research in locusts
He argues against using the term “morpha” (morph) and suggests “phase” instead. Well ahead of his time he also suggests for the control of locust outbreaks that “the theory of phases suggests the theoretical possibility of the control of migratoria by some measures directed not against the insect itself, but against certain natural conditions existing in breeding regions which are the direct cause of the development of the swarming phase.” Therefore, it is due to the studies of Boris Uvarov , and his friend V. Plotnikov, that the appearance of locusts in two phases, the solitarious and the gregarious one, was recognized for the first time. Later in England, Sir Boris Uvarov (Fig. ) continued his studies on locusts (Uvarov ) as the director of the Anti-Locust Research Centre in London from 1945 to 1964.
The reasons why L. migratoria and L. danica may have been considered two separate species were differences in the colouration of adults and nymphs (Fig. ), behavioural differences (with L. danica nymphs avoiding gregarization and L. migratoria nymphs joining other hopper bands), and differing preferences for breeding places. However, the male genitalia, a morphological feature employed for exact species determination in insects, were of very similar shape, and artificial breeding experiments yielded nymphs with features of L. danica that had hatched from L. migratoria egg pods (Uvarov ). Nowadays it is clear that approximately 20 species of grasshoppers world-wide, in most cases not closely related taxonomically, have the capacity to form swarms and exhibit different phases (a phenomenon now called polyphenism, meaning the same genotype exhibiting several phenotypes; Simpson and Sword ; Pener and Simpson ). The species studied most are the migratory locust, Locusta migratoria, and the desert locust, Schistocerca gregaria. Detailed studies in both species revealed that all kinds of intermediates between the phases can be found and that transitions between phases can occur at any stage and in both directions, all depending on the density of populations (Pener ; Pener and Simpson ). Each of the 20 species must be regarded separately with respect to phase changes, as clear species-specific differences and even geographical variants within one species may exist (Pener and Simpson ). For example, solitarious individuals of Schistocerca gregaria possess six larval instars (nymphal stages) whereas gregarious ones have only five. This is different in the migratory locust, Locusta migratoria, where both phases seem to have the same number of instars. In the desert locust (Schistocerca gregaria), very obvious differences between the phases exist. Solitarious desert locusts are shy and sedentary, do not move much, and with their green or brownish cryptic colour hide during the day. They also avoid other locusts, except for mating of course, and if they migrate, they are reported to fly at night (Uvarov ). In contrast, gregarious desert locusts are not cryptic at all but display anti-predator warning colours in bright yellow and black. Gregarious animals are very active, they aggregate both as nymphs ("marching hopper bands") and as adults (groups, swarms), and they fly by day and roost overnight in trees. Differences are also observed in grooming frequency, resting time, and the time spent near a group of other locusts (Rogers et al. ). In addition, walking speed differs, as does walking posture: solitarious locusts walk slowly with the body held low to the ground, whereas gregarious locusts walk rapidly with the body held high above the ground; the hind legs also exhibit different trajectories during walking. A most interesting difference was described by Simoes et al. with respect to olfactory learning: 70% of all tested locusts preferred vanilla odour to lemon. When vanilla was then paired with an aversive stimulus, only 34% of solitarious locusts still preferred vanilla, in contrast to the majority of gregarious locusts, 59%, that still preferred it. Even more astonishing is that solitarious locusts avoid feeding on Black Henbane (Hyoscyamus niger), which contains the alkaloid hyoscyamine, whereas gregarious locusts feed on this plant. Hyoscyamine therefore acts as either an aversive or an appetitive stimulus, depending on phase (Despland and Simpson , ).
If food containing hyoscyamine is paired with an odour in an olfactory learning experiment, solitarious locusts develop a strong aversive memory for the paired odour, in contrast to gregarious locusts (Simoes et al. ). An aversive memory for an odour is therefore attenuated when a locust becomes gregarious. It is now clear that polyphenism is a multifactorial phenomenon which is not uni-causal but involves changes in morphology, anatomy, colouration, development, reproduction, physiology, biochemistry, molecular biology, and behaviour, as well as all aspects of ecology including chemical ecology (Hassanali et al. ; de Loof et al. ; Despland and Simpson ; Simpson and Sword ; Pener and Simpson ; Cullen et al. ), and all these changes may have different time scales, with those of behaviour being the fastest to occur (Ayali ).
A very good test for phase was designed by Simpson et al. and later systematically applied by Anstey et al. : locusts entered an arena with a group of other locusts displayed on one side and an empty space on the other. The individual locust's choice in approaching either side ultimately provided a "phase score" of solitariousness or gregariousness; a sketch of how such a behavioural score can be computed is given below. From previous observations it was clear that tactile and olfactory stimuli were among the major triggers of phase transition. Indeed, crowding a solitarious individual with other nymphs, regularly touching the hind femur of such a nymph, or exposing the nymph to the smell and sight of other nymphs caused the tested individuals to turn from solitarious to gregarious (Rogers et al. ; Anstey et al. ).
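To illustrate how arena observations can be condensed into a single phase score, the sketch below fits a logistic model to behavioural measurements, in the spirit of the logistic-regression approach used in the studies cited above. All variable names and example values are hypothetical: the assumption is simply that each animal is scored on a few arena behaviours (fraction of time near the stimulus group, walking speed, number of approaches) and that a model trained on known solitarious and gregarious animals then returns P(gregarious) for a new individual.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: one row per locust, columns are arena
# measurements (fraction of time near the stimulus group, mean walking
# speed in cm/s, number of approaches to the group).
X_train = np.array([
    [0.05, 1.2, 0],   # isolation-reared (solitarious)
    [0.10, 1.5, 1],
    [0.15, 2.0, 1],
    [0.70, 4.5, 6],   # crowd-reared (gregarious)
    [0.80, 5.0, 8],
    [0.90, 5.5, 9],
])
y_train = np.array([0, 0, 0, 1, 1, 1])  # 0 = solitarious, 1 = gregarious

model = LogisticRegression().fit(X_train, y_train)

# Phase score of a new, untested individual: P(gregarious), between 0 and 1.
new_locust = np.array([[0.55, 3.8, 4]])
p_greg = model.predict_proba(new_locust)[0, 1]
print(f"phase score P(gregarious) = {p_greg:.2f}")
```

In practice many more animals and behavioural variables are used, and the fitted model can then track an individual's phase state over time, for example before and after a period of crowding.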
Tactile stimuli were shown to increase serotonin concentrations in thoracic ganglia (Burrows et al. ), and injecting either serotonin, its precursors, or receptor agonists into solitarious locusts induced gregarious behaviour. Correspondingly, injection of serotonin receptor antagonists or synthesis inhibitors prevented solitarious locusts from becoming gregarious (Anstey et al. ). Other biogenic amines are affected as well: isolating Schistocerca gregaria increased dopamine levels in the brain (Alessi et al. ). Previously, Ma et al. found in a genome-wide gene expression profiling of solitarious and gregarious nymphs of Locusta migratoria that catecholamine pathways, particularly those of dopamine, are upregulated in the gregarious phase. For Locusta migratoria, Guo et al. report that dopamine signalling via Dop1 receptors seems to play a role in gregarization, whereas signalling via Dop2R mediates the opposite, solitarious behaviour. With respect to tyramine and octopamine, tyramine titres decrease when gregarious Schistocerca nymphs are isolated (Rogers et al. ), and octopamine titres decrease in the metathoracic ganglion when adult gregarious Schistocerca are isolated (Alessi et al. ). When body volatiles of Locusta migratoria are tested (Ma et al. ), crowded solitarious locusts are attracted and isolated gregarious locusts are repelled. Both tyramine and octopamine signalling were correlated with this behavioural switch. Using RNAi-mediated knockdown of receptors, Ma et al. showed that OARa and TAR are involved. Activation of OARa signalling in solitarious locusts caused a behavioural shift from repulsion to attraction, while enhancement of TAR signalling in gregarious locusts resulted in a shift from attraction to repulsion. Interesting differences were also found with respect to brain size: gregarious Schistocerca have a brain about 30% larger, for example with larger optic lobes, despite their smaller body size compared to solitarious individuals (Ott and Rogers ). In solitarious Schistocerca the antennal lobes are larger, as they possess more olfactory receptor neurons than gregarious ones (Anton and Rössler ).
An impressive number of publications deals with the search for pheromones, in particular aggregation pheromones (see the reviews of Pener and Yerushalmi ; Ferenz and Seidelmann ; Hassanali et al. ; Pener and Simpson ; Cullen et al. ). Others deal with differences between the phases in their chemosensory equipment and in the olfactory processing of pheromonal signals (Ochieng et al. ; Anton et al. ). It is now clear that chemical ecology plays an important part in the life of locusts and that a bouquet of various odorants may act at different times during their development and adult maturation and, therefore, may have different effects at different times. Substances were isolated from the body of locusts, from faeces, from egg pods or from the soil in which mass oviposition took place, and then analysed by gas chromatography coupled with mass spectrometry or, in some cases, also with electroantennograms. Locusts thus possess a bouquet of odorants such as hexanal, octanal, nonanal, decanal and the corresponding acids, including valeric acid (pentanoic acid); faecal compounds are guaiacol, phenol and indole. Most interesting are the volatiles of adult solitarious or gregarious Schistocerca males. In both phases, anisole, benzaldehyde, guaiacol and phenol were identified, and veratrole was present in trace amounts in solitarious males (Njagi et al. ). The major compound of gregarious Schistocerca males, absent from solitarious males, was phenylacetonitrile (PAN) or benzyl cyanide. Detailed analyses by Seidelmann et al. and Seidelmann and Ferenz showed that this substance is released from the wings and legs and can be considered a mating regulator or "courtship-inhibiting pheromone", as it prevents additional mating between females and other males. In a different species, Schistocerca piceifrons, Stahr and Seidelmann showed that females preferentially mate with males emitting a high concentration of the volatiles phenylethyl alcohol (PEA) and (Z)-3-nonen-1-ol (3-NOL), and this also affected the successful hatching of their larvae. Recently, a careful study by Guo et al. identified 4-vinylanisole (4-methoxystyrene) as an aggregation pheromone in Locusta migratoria. This substance is attractive for both nymphs and adults and seems to act via a particular odour receptor (OR35). From the above it is clear that solitarious and gregarious locusts experience a wealth of chemical cues, which are used for communication that is much more complex than previously anticipated.
Finally, the latest studies on locusts are concerned with genes and their differential expression in the different phases. Such studies are challenging because the genome of Schistocerca gregaria is the largest insect genome sequenced and assembled to date. In total, 18,815 protein-encoding genes are predicted in the desert locust genome, of which 13,646 (72.53%) obtained at least one functional assignment based on similarity to known proteins (Verlinden et al. ). In Locusta migratoria, a similarly large number of genes, 17,307, was predicted (Wang and Kang ; Wang et al. ). In a study of the transcriptomes of solitarious and gregarious Locusta migratoria, 214 transcripts exhibited differences (more stress-response-related genes in gregarious adults, more oxidative-stress-resistance genes in solitarious adults; Badisco et al. ). Genes upregulated in gregarious locusts included heat shock proteins, proteins that protect from infections, and a greater abundance of transcripts for proteins involved in sensory processing and in nervous system development and plasticity; in general, genes that play a role in stress responses. Upregulated in solitarious locusts were genes related to anti-oxidant systems, detoxification and anabolic renewal; in general, protection against the slowly accumulating effects of ageing. Such genomic studies could be very interesting in the genus Schistocerca, as many species live in North and South America and only a few of these show polyphenism and the capacity to form swarms. The different lifestyles of these closely related species may offer a very good opportunity to identify further genetic or epigenetic factors relevant for polyphenism (Ernst et al. ).
The research of Boris Uvarov in Russia, published 100 years ago, still matters to today's entomologists, neurobiologists, and ecologists, because locust swarms continue to threaten the nutritional basis of millions of people in many parts of the world. As recent outbreaks in Eastern Africa indicate, these problems might increase in the future due to climate change affecting ecosystems and to changes in agricultural land use. Unfortunately, the affected areas belong to the most fragile and vulnerable landscapes. In addition, civil wars, ethnic unrest and often a lack of government oversight add to the persisting problems with locusts (Meynard et al. ). It is now clear that phase polyphenism can have many causes and is a multifactorial process. Based on the pioneering work of Boris Uvarov we have acquired a much better knowledge of phase polyphenism, but our understanding is far from complete. This certainly merits further research, particularly with the now-available methods for addressing the influence of epigenetic modifications and of changes in the microbiome, and for performing modelling studies at the level of populations (Wang and Kang ; Ernst et al. ; Ayali ).
Autopsy histology data suggest cirrhosis is frequently under-reported on death certificates

Chronic liver diseases are common and can lead to cirrhosis. Cirrhosis is frequently asymptomatic and is often diagnosed at a late stage. However, cirrhosis is associated with tremendous morbidity, mortality, and impaired quality of life. Thus, accurate reporting of cirrhosis in healthcare registers is important to track changes that can guide health interventions. However, such data rely on the correct diagnosis of cirrhosis on reports such as death certificates. It is not known how common undiagnosed cirrhosis might be at death, or how frequently it is correctly reported as a cause of death on death certificates. Downstream consequences of cirrhosis, such as infections or fractures due to sarcopenia, might not be accurately identified as such when physicians fail to register cirrhosis. Here, we aimed to estimate how often cirrhosis is found on autopsy reports and compare this to data from official death certificates. The study was approved by the Stockholm Ethics Review Board (No. 2014/1287-31/4). Because this is a register-based study using anonymized data and no patient contact, the Ethics Review Board waived informed consent. We identified 42,495 patients with a liver biopsy performed after death using validated Swedish registers. Of these, 6187 had cirrhosis as defined by biopsy. Of these, 2523 (41%) did not have a diagnosis corresponding to cirrhosis on their death certificate (Table ). Patients without a diagnosis of cirrhosis on their death certificate were older than patients for whom it was reported (median 73 vs. 67 years, p < 0.001), but there was no difference in sex (65% males vs. 63%, p = 0.24). The test characteristics of a diagnosis of cirrhosis in the causes of death register, using data from the liver biopsy as gold standard, were: sensitivity = 59.2%, specificity = 94.1%, positive predictive value = 63.0%, negative predictive value = 93.1%. In total, 2717 (43.9%) of patients with biopsy-confirmed cirrhosis did not have a known liver disease, such as alcohol-related liver disease, recorded in the National Patient Register at any time prior to death. Undiagnosed liver disease was more common in patients without cirrhosis recorded on the death certificate (n = 1803, 71.5%) compared to those where cirrhosis was also reported on the death certificate (n = 914, 24.9%, p < 0.001). That is, in patients where liver disease was known prior to death, it was more commonly recorded on the death certificate. Further, we found that the proportion of patients with unrecorded cirrhosis was stable over time (42% in the 2011–2017 period), but that the absolute number of autopsies declined in the later decades (Table ), in accordance with previous studies. In those without cirrhosis noted on the death certificate, the most commonly reported causes of death were cardiovascular diseases (49%) or tumors (19%). Thus, we show that in patients with autopsy-confirmed cirrhosis, over 40% do not have mention of cirrhosis on the final death certificate. In most of those where cirrhosis was not reported on the death certificate (72%), no liver disease was known prior to the autopsy, whereas known liver disease was found in the majority of those where cirrhosis was reported on the death certificate (75%).
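As a transparent check on these test characteristics, the short sketch below recomputes them from the counts given above. The true-positive and false-negative counts follow directly from the reported numbers (6187 biopsy-confirmed cirrhosis cases, of whom 2523 lacked the diagnosis on the death certificate); the split of the 36,308 biopsy-negative patients is not reported directly and is back-calculated here from the stated specificity, so those two cells are approximate.

```python
# Recompute sensitivity, specificity, PPV and NPV from the 2x2 table.
total_biopsies = 42_495
cirrhosis_on_biopsy = 6_187

tp = cirrhosis_on_biopsy - 2_523  # cirrhosis on biopsy AND on certificate
fn = 2_523                        # cirrhosis on biopsy, missing on certificate
no_cirrhosis = total_biopsies - cirrhosis_on_biopsy  # 36,308 biopsy-negative

# Approximate split of biopsy-negative patients, back-calculated from the
# reported specificity of 94.1% (not given directly in the text).
tn = round(0.941 * no_cirrhosis)
fp = no_cirrhosis - tn

sensitivity = tp / (tp + fn)   # ~0.592, matching the reported 59.2%
specificity = tn / (tn + fp)   # 0.941 by construction
ppv = tp / (tp + fp)           # ~0.63
npv = tn / (tn + fn)           # ~0.93

print(f"sensitivity={sensitivity:.1%} specificity={specificity:.1%} "
      f"PPV={ppv:.1%} NPV={npv:.1%}")
```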
This signals a problem with the sensitivity of death certificate reports in accurately classifying the presence of cirrhosis. A plausible explanation is that updated death certificates (which are not mandatory) were not submitted by the clinician receiving the pathologist's report containing the information on cirrhosis. Another explanation could be that the cause of death was considered completely unrelated to cirrhosis, although this seems unlikely based on the top five causes of death identified in the examined groups, since cirrhosis is strongly associated with such diseases. We also cannot rule out selection bias; for instance, patients who died of known cirrhosis might not have undergone autopsy. These novel results highlight that even in a highly organized country such as Sweden, with extensive registers, cirrhosis is frequently under-reported on death certificates, and the autopsy is often the first time point at which cirrhosis is recorded. Even when cirrhosis is identified at the autopsy, the death certificate frequently fails to capture it. This is problematic since such reports form the basis of much epidemiological research on disease trends, and the proportion of deaths where autopsies are performed is declining. Recent studies show an increase in the mortality of liver diseases globally. Coupled with a marked reduction in the number of autopsies performed, this figure could be understated, and cirrhosis might be a more important contributor to death than previously thought.
Development of a patient-centred tool for use in total hip arthroplasty

A major implication of replacing a joint with an implant, in this case a total hip arthroplasty (THA), is that the person will be living with the implant for the rest of his/her life. Patients are keen to be informed about the benefits and harms of the intervention and how these evolve over the long term. Patients about to undergo THA would benefit from the experiences of people similar to them who have already had the surgery and lived with the prosthesis. However, the systematic long-term documentation of outcomes remains the exception rather than the rule in clinical practice, thus limiting the structured knowledge patients and clinicians can gain from previous experience. Even when it is available, the knowledge gained is rarely shared with patients but stays with the clinician or is only disseminated through scientific publications. Patients' information needs are broad: they seek information from the clinician, but also from interpersonal sources, especially people like themselves (e.g. family members, friends), and/or from information material (leaflets, internet, television, social media, print media). The latter is generally broad and not specific to the patient's individual circumstances. Uncertainty regarding the experience of disease, treatment, prognosis, and specific risks related to one's general health status is associated with stress and high emotional pressure. Knowing what to expect before surgery and being offered shared decision-making positively influence patients' outcomes and satisfaction with care after arthroplasty. This requires clinicians to individualize their explanations to each patient, or at least to groups of patients that share similar attributes. This is typically done with the help of prognostic tools such as clinical prediction scores, risk stratification tools, or risk calculators derived from analyses of large datasets (administrative data, electronic health records, registries or cohorts). The aim and ambition of these prognostic tools is to provide individualised predictions, on one or more key outcomes, typically short-term, to inform patients about the likely benefits and harms of surgery. Prediction models have become popular in recent years to inform clinical practice. However, these models have often failed to predict the outcomes of orthopaedic surgery, sometimes due to small numbers of events or missing data on outcomes or predictors in the available databases. In general, the clinical impact of prediction models remains unproven. A second type of tool focuses on summarising the evidence. These are often disseminated in scientific publications, such as systematic reviews or clinical practice guidelines, but can also come in the form of patient decision aids, either printed or through online platforms or applications (e.g. Magic App, RECOVER-E). Decision aids are designed to support shared decision-making, either by preparing patients to interact with clinicians or by enhancing the conversation during the clinical encounter. Decision aids aim to present the evidence on benefits and harms, their uncertainties, as well as practical issues, in a format that is understandable and intuitive to patients.
None of the current prediction or decision-making tools covers the relevant benefits and harms over the short-, mid-, and long-term after THA, matched to patients' profiles and informed by their needs, interests, and concerns. The aim of this project was to produce an information tool enabling patients and clinicians to benefit more directly from previous patients' experiences with THA. This was done by seeking out patients' views on what is important to them, leveraging the corresponding registry data, and producing outcome information perceived as relevant, understandable, adapted to a specific patient's profile, and readily available.
Registry data

An essential condition for the project was the existence of a dataset with the relevant predictor and outcome information over the long term. The different steps undertaken for the tool creation are summarized in . The Division of Orthopaedics at the Geneva University Hospitals established an institutional arthroplasty registry (GAR) in 1996. The institution is a large tertiary public hospital in a high-income country with universal health care coverage. The registry continuously and systematically collects detailed information about patients' characteristics before surgery, about the surgery itself, and about the patients' short-, mid-, and long-term experiences with their hip prosthesis. These data were the basis for the information tool. The registry has been described in detail elsewhere .

Patient interviews and survey

To capture patients' interests, needs, and concerns, we contacted 379 randomly selected patients from the registry, either just about to undergo or having already undergone primary elective THA. Patients were invited to participate in a survey from 13 February to 17 September 2020. The questionnaire was specifically developed for this project (Appendix) and sent by mail to patients who were either just about to undergo surgery (n = 95), or at 1 (n = 94), 5 (n = 95), or 10 years after surgery (n = 95). We asked patients about their views on the benefits and harms associated with the operation and with living their daily life with the prosthesis, and which of these they perceived as most important. The items of the questionnaire were chosen based on published literature, surgeons' input (n = 7), observations of patients attending a pre-operative course on THA, and one-to-one interviews with eight patients in December 2019 and January 2020. Ethics approval was obtained for use of the questionnaire within the registry and this project (N° CER PB_2017–00164).

Registry data analysis

Outcomes of interest were selected based on survey participants' responses and grouped into mutually exclusive categories. These were later mapped to specific questions included in the GAR that could be used to measure outcomes in our patient population. Patients who underwent a primary elective THA between March 1996 and December 2019 were included in the analysis. The end of follow-up was December 31, 2020. We excluded participants who had a large head (>28 mm), a metal-on-metal bearing, or a bilateral operation on the same day. Conditional Inference Tree (CIT) analysis was used to construct classification algorithms, based on pre-operative characteristics, that identify clusters of patients with homogeneous outcomes. This analytical approach was used to differentiate between profiles of patients. Separate algorithms were developed for each relevant outcome and time point (i.e. at 1, 5, and 10 years after THA). Additionally, survival trees were generated using the classification and regression tree method to produce cluster-specific survival curves for outcomes reporting on clinical events that could happen at any time between registry follow-up points. CIT seeks to identify predictors that split the population into subgroups (clusters) that are homogeneous in terms of the variance in outcome. It does so by identifying variables of decreasing importance to the classification until a point is reached when additional variables no longer have discriminatory power; a simplified sketch of this recursion is given below.
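For readers unfamiliar with conditional inference trees, the sketch below illustrates the core recursion in Python. The original CIT method of Hothorn et al. is implemented in R's partykit package and is not reproduced here; this is a simplified stand-in that uses a median split with a two-sample t-test as the association test and a Bonferroni-adjusted stop criterion. The toy data and variable names are hypothetical.

```python
import numpy as np
from scipy import stats

def grow_tree(X, y, names, alpha=0.05, min_n=20):
    """Recursively split on the most significant predictor, stopping when
    no Bonferroni-adjusted test is significant (a simplified CIT)."""
    n, p = X.shape
    if n < 2 * min_n:
        return {"cluster": y}  # terminal node: the outcome values of this cluster
    # Association test for each predictor: median split + two-sample t-test.
    pvals, cuts = [], []
    for j in range(p):
        cut = np.median(X[:, j])
        left, right = y[X[:, j] <= cut], y[X[:, j] > cut]
        if len(left) < min_n or len(right) < min_n:
            pvals.append(1.0); cuts.append(cut); continue
        _, pv = stats.ttest_ind(left, right, equal_var=False)
        pvals.append(pv); cuts.append(cut)
    j = int(np.argmin(pvals))
    if pvals[j] * p >= alpha:              # Bonferroni-adjusted stop criterion
        return {"cluster": y}
    mask = X[:, j] <= cuts[j]
    return {
        "split": (names[j], cuts[j], pvals[j]),
        "left": grow_tree(X[mask], y[mask], names, alpha, min_n),
        "right": grow_tree(X[~mask], y[~mask], names, alpha, min_n),
    }

# Toy example: a WOMAC-like outcome driven mainly by the baseline score and age.
rng = np.random.default_rng(0)
n = 600
age = rng.uniform(40, 90, n)
bmi = rng.uniform(18, 40, n)
womac_pre = rng.uniform(20, 80, n)
outcome = 0.5 * womac_pre + 0.2 * age + rng.normal(0, 5, n)

tree = grow_tree(np.column_stack([age, bmi, womac_pre]), outcome,
                 ["age", "bmi", "womac_pre"])
print(tree["split"])  # expected to split first on womac_pre
```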
The classification and regression tree method used to generate survival trees achieves the same objective as CIT, but by identifying the pre-operative predictors that generate distinct survival curves. A classification method was chosen over prediction models because the latter produce coefficients that are generally difficult for patients to understand, whereas classification is a natural concept that can be more easily discussed with patients in a clinical consultation. In addition, prediction models normally generate a probability that an individual will experience a given event (e.g. a fracture) or an expected result for a given outcome (e.g. a mean WOMAC score), which means that, even if the model is highly accurate, the patient has to both understand the concept of probability and interpret the number for the information to be meaningful. Many patients will not meet those conditions. The classification approach, on the other hand, presents patients with a simpler view of how other people like them did (e.g. how many had fractures and how many did not, or the full distribution of their WOMAC scores), which is likely to add valuable information for patients. Candidate predictors were selected from the GAR based on clinical input and evidence reported in published literature. To mitigate the impact of missing data on results, imputation methods were used to predict values for both outcomes and predictors. To assess internal validity, 1000 bootstrap samples of equal size to the original sample were generated and the classification analysis undertaken again for each. As the classification method does not produce predicted values, the performance of the models cannot be assessed by comparing predicted with observed values. Instead, the bootstrap method makes it possible to evaluate whether a different make-up of the sample would lead to different classification trees, which was informative for internal validity. Statistically significant predictors from the primary analysis were hence compared to the frequency of predictors identified as statistically significant in the 1000 bootstrapped CITs. The number of terminal nodes in each tree of the primary analysis was also compared to the average number of terminal nodes across the corresponding bootstrapped trees. This was done for each outcome and corresponding period of analysis; a sketch of this stability check is given below.

Tool creation and pre-testing

The tool "Patients like me" was designed by a graphic designer, with tailored feedback from the team, around the topics most relevant to patients and based on the findings from the primary analysis. The pre-test of the information tool was conducted from January to March 2022 by the sociologist, with patients participating either in the pre-operative education session or in the post-operative follow-up consultations. Sixteen patients agreed to participate in the pre-testing (10 women, 6 men). They tested the information tool either online or on paper according to their personal preference. During the pre-test period, we modified and retested the tool twice.
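Before turning to the results, the sketch below makes the bootstrap stability check described under "Registry data analysis" concrete. It uses the same hypothetical variables and the same simplified median-split test as the earlier sketch, and counts how often each candidate predictor wins the root split across 1000 bootstrap resamples; in the study, the analogous comparison was made for the full set of significant predictors and for the number of terminal nodes.

```python
import numpy as np
from scipy import stats

def first_split_variable(X, y, names, alpha=0.05):
    """Name of the most significant root-split predictor, or None if no
    Bonferroni-adjusted median-split t-test is significant."""
    pvals = []
    for j in range(X.shape[1]):
        cut = np.median(X[:, j])
        left, right = y[X[:, j] <= cut], y[X[:, j] > cut]
        _, pv = stats.ttest_ind(left, right, equal_var=False)
        pvals.append(pv)
    j = int(np.argmin(pvals))
    return names[j] if pvals[j] * X.shape[1] < alpha else None

# Toy cohort as before (hypothetical variables).
rng = np.random.default_rng(1)
n = 600
X = np.column_stack([
    rng.uniform(40, 90, n),   # age
    rng.uniform(18, 40, n),   # bmi
    rng.uniform(20, 80, n),   # womac_pre
])
names = ["age", "bmi", "womac_pre"]
y = 0.5 * X[:, 2] + 0.2 * X[:, 0] + rng.normal(0, 5, n)

# Stability check: how often does each predictor win the root split across
# bootstrap resamples of the same size as the original sample?
counts = {}
for _ in range(1000):
    idx = rng.integers(0, n, n)  # sample with replacement
    var = first_split_variable(X[idx], y[idx], names)
    counts[var] = counts.get(var, 0) + 1
print(counts)  # a predictor winning in most resamples suggests a stable split
```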
Survey results

Of the 379 posted questionnaires, 275 were returned complete (72.6%) and 37 were returned incomplete (9.8%). Among the patients who returned the survey complete, 54.2% were women. Participants' mean age was 70 years (±11.3, range 36–95). Educational achievement was mandatory school for 32.0% of patients, secondary level for 31.2%, and tertiary level for 36.8% (missing n = 22). Among the patients who responded, 30.9% were soon to have their surgery (n = 85), 21.1% were at 1 year postoperative (n = 58), 29.1% at 5 years (n = 80), and 18.9% at 10 years after surgery (n = 52). Benefits perceived as most important (≥80%) included: pain relief (92.1%), independence in walking and moving (90.3%), and return to daily activities at home (85.1%) ( ). Respondents were also asked to rank the three most important benefits. The most important benefit was pain relief (78%). For the second and third ranks, no single benefit dominated; instead, a variety were selected: returning to my leisure activities (31%), returning to my daily activities at home (17%), and independence in walking and moving (17%) for the second choice; and returning to my leisure activities (18%), sleeping again (17%), and stopping or reducing pain medication (13%) for the third choice. When stratifying the responses by time from surgery, results were similar to those at all time points combined. The harms perceived as most important (≥80%) by the patients in the survey included: infection of my prosthesis (83%), persistent pain (81%), fracture of the bone surrounding my prosthesis (81%), loss of control over my health (80%), fracture of my prosthesis (80%), and dislocation of my prosthesis (80%) ( ). Patients were also asked to rank the three most important harms. No single harm dominated the first, second, or third rank; instead, a variety were selected: for the first rank, pain that spreads to other joints (42%) followed by fracture of the bone surrounding my prosthesis (12%); for the second, fracture of the bone surrounding my prosthesis (23%), inability to resume all my activities (13%), and persistent pain (11%); and for the third, difference in leg length (13%), inability to resume all my activities (12%), and fracture of the bone surrounding my prosthesis (11%). Of note, fracture of the bone surrounding my prosthesis ranked high across all three choices. When stratifying by time from surgery, all harms were mostly rated as important or very important by all groups of patients, and results were broadly similar to those for all patients taken together. The risk of an early change of my prosthesis due to wear or loosening was important across all patient groups, ranging between 81% (before surgery) and 69% (at 1 and 5 years). More generally, patients' answers indicated pain relief, activity improvement, complications, and what to expect in the future as the most important topics. Their views on what mattered to them remained consistent from before surgery to 10 years after surgery.

Data analyses results

The most important benefits and harms of THA reported by patients were covered by one or more data items included in the GAR. The Western Ontario and McMaster Universities Arthritis Index (WOMAC) captures patients' experience of pain and activity. The Short Form 12 (SF-12), a patient-reported outcome measure assessing the impact of health on everyday life, includes questions about pain, independence, and interference.
The UCLA Activity Scale (UCLA) assesses physical activity and includes a question about return to work and usual activities. The GAR also includes specific questions about pain medication, as well as records of whether the patient experienced infection, fracture, dislocation, or a revision of their prosthesis. All benefits and harms were grouped into four outcome categories: pain, activity, complications, and expectations. details the specific outcomes of interest as well as the corresponding questions/variables and the source used to measure them. Detailed results of the classification analysis for each outcome category are to be fully reported in upcoming manuscripts. A total of 6,836 operations were included in the CIT analysis. Characteristics of the patients are reported in . The final sample had more women (56.8%) than men, and mean age was 68.9 (±12.2) years. The indication for surgery was mostly primary osteoarthritis (82%, n = 5,610). Mean follow-up was 8.5 years (±5.7, range 0–24). Overall, 2,122 (31%) patients died between 1996 and 2020 and 347 (5.1%) were lost to follow-up. The CIT analysis was applied to all Pain and Activity outcomes at 1, 5, and 10 years after THA, as well as to all Complications and Expectations outcomes 1 year after surgery. Further, survival trees were generated for all Complications and Expectations outcomes up to 20 years after THA. Candidate predictors were identified for each outcome; they included clinical and demographic variables such as age, sex, body mass index (BMI), comorbidity count, previous hip surgeries, underlying diagnosis, symptom duration, American Society of Anesthesiologists (ASA) grade, Charnley disability grade, smoking status, and whether participants had public or private health insurance. Other predictors included outcome variables measured at baseline, such as specific WOMAC questions and overall scores, Harris pain and function scores, and specific SF-12 questions including self-rated health and the composite physical and mental component scores. Missing data were more common with longer follow-up time. Reasons for not completing follow-up questionnaires included death, moving away from Switzerland, refusal to participate, and poor general health. Imputation methods were applied to predict values for missing data. There was no missing information on ASA grade, diagnosis, or insurance status. Findings of the primary analysis are to be reported as figures of the resulting regression trees showing statistically significant predictors and threshold values, each generating a new branch into further predictors and thresholds, if relevant, until all clusters for the given outcome and time point are shown. The frequency distribution of the outcome variable will be included for each cluster.

Outputs and testing

Three outputs were produced: (1) a 28-page information leaflet (web and print version) (Figs – ) for the patient, intended for use during the patient-surgeon consultation at the time of the decision whether to operate or not, prior to surgery, as well as after surgery. The patient can read the leaflet on their own or discuss it with others.
The tool can also be used as support in a pre-operative education session; (2) a digital visualisation for the surgeon (integrated in the registry) illustrating the patient's profile and the corresponding trajectories, for all outcomes, of past patients like her/him, intended to complement the pre-operative planning strategy; and (3) an 8-page infographic brochure summarizing the project's approach, methods, results and applications, intended to inform clinicians, researchers, and other health professionals from different specialties and institutions. Pre-testing of the 28-page patient information leaflet showed that feedback was positive across all patients. Regarding the content of the leaflet, patients appreciated the information, which was seen as interesting, clear and complete. The amount of information seemed a bit too much for a few patients. Some appreciated that the information in the leaflet was complementary to that received from the surgeon during the patient-surgeon consultation, particularly on the issue of post-operative pain. A first round of feedback highlighted the need to clarify the information on pain medication (i.e., that about 20% of patients report taking painkillers one year after surgery). A modified version containing more detail on pain medication was tested, and there was no further criticism on this point. Regarding the form of the leaflet (colours, fonts, etc.), the overall impression was pleasant for all patients. The design and the colours were appreciated by most. A few patients suggested increasing the size of the smallest fonts, which was done. Pre-testing was undertaken either on a tablet or on the paper version, as desired. Most patients were interested in taking home a paper version of the information tool.
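To make the matching idea behind the surgeon-facing visualisation concrete, the sketch below shows one way a new patient's profile can be mapped to the cluster of "patients like me" in a fitted tree. It uses scikit-learn's CART implementation as a stand-in for the conditional inference trees used in the study, and all variable names and data are hypothetical.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(2)
n = 2000

# Hypothetical registry extract: pre-operative profile and 1-year WOMAC score.
X = np.column_stack([
    rng.uniform(40, 90, n),   # age
    rng.integers(0, 2, n),    # sex (0 = male, 1 = female)
    rng.uniform(20, 80, n),   # pre-operative WOMAC
])
womac_1y = 10 + 0.4 * X[:, 2] + 0.1 * X[:, 0] + rng.normal(0, 8, n)

# A shallow tree so that leaves correspond to a handful of patient clusters.
tree = DecisionTreeRegressor(max_depth=3, min_samples_leaf=100).fit(X, womac_1y)
leaf_of = tree.apply(X)  # leaf (cluster) id of every past patient

# A new patient about to undergo surgery: 72-year-old woman, baseline WOMAC 55.
new_patient = np.array([[72, 1, 55]])
leaf = tree.apply(new_patient)[0]

# "Patients like me": the outcome distribution of past patients in her cluster.
peers = womac_1y[leaf_of == leaf]
print(f"{len(peers)} past patients in this cluster")
print(f"1-year WOMAC: median {np.median(peers):.0f}, "
      f"IQR {np.percentile(peers, 25):.0f} to {np.percentile(peers, 75):.0f}")
```

Presenting the full distribution of peers' outcomes, rather than a single predicted value, is what distinguishes this approach from a conventional prediction model.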
In this project we have developed a comprehensive tool for patients and clinicians to discuss the entire care process of total hip replacement, from before surgery to 20 years after surgery, and to provide information on the multiple benefits and risks perceived as important by patients. The information is tailored to groups (average outcomes) as well as to individual patients (outcomes for specific patient profiles). This project is an example of the integration of registry-derived information into the clinical care process, and it highlights the importance and potential of clinical registries and databases in the learning health system for improving quality of care . Past patients’ experiences were made accessible through the systematic documentation of the registry, an established, well-documented data collection infrastructure . The pre-test patients’ feedback on the tool was unanimously positive. They considered it interesting, clear, complete, and complementary to other information received. The material has been created for various circumstances and in different formats (web based, printed, and integrated in the workflow via the registry). The implementation is immediate and inexpensive and has the potential to change the patient’s and clinician’s experience. We have shown here the case of THA, but using past patients’ experiences to inform clinical decision making and follow-up is obviously not limited to joint replacement. Previous studies have used large datasets/registries to produce individualized predictions and inform patients, e.g., about the likely quality-of-life benefit of surgery , the risk of complications such as short-term revision and death , or both . However, to our knowledge this is the first study that leveraged registry data to develop a comprehensive tool (covering multiple harms and patient-reported benefits over the long term) that allows a patient today to be matched to others (“Patients like me”) who had a THA in the past. It is also novel in that it uses clustering methods (via regression trees in our analysis) instead of clinical prediction or prognostic models to generate information about what surgery might bring for patients and their clinicians. Prediction models can be, and indeed are, useful for guiding clinical practice, but from the perspective of patients they are likely much less intuitively informative. Prediction models yield an estimate of the likelihood that a specific patient experiences a particular outcome, commonly explained as a given number in 100 people having such a fate , or, more complicated yet, as an odds ratio. These concepts are not straightforward or easy to understand for most. Although our study uses regression models, we use them as a vehicle to identify variables with which to create clusters, which we can later match to the patient who is about to have surgery. By doing this, we can present prospective patients with information about 100 people like them, of whom a known, rather than predicted, number experienced the outcomes of interest or those they want to avoid. We hypothesized that patients would understand this more easily, and their feedback suggests that they do. With this information easily accessible, patients can be empowered to have more meaningful discussions with their clinicians, ultimately improving the quality of care through a better experience of shared decision-making and follow-up care.
Other initiatives have been launched recently with the similar aim of using routinely collected registry data to inform personalised treatment. The DESTINY platform used by neurologists and psychiatrists in Germany helps to identify side effects or interactions of treatments and also collects patient data to build algorithms that can be used to predict treatment response . Although its predictions are not provided by patient clusters, the concept of using registry data to inform patient treatment follows a similar aim as our study. The methods we employed were not without limitations. First, some variables had high levels of missing data, which is a common problem in longitudinal datasets, especially for patient-reported variables. To address this, we applied imputation methods, but these add some level of uncertainty to the results. Also, when identifying clusters, the conditional tree approach we used finds thresholds in predictors using significance tests, which cannot be modified if greater heterogeneity in the final clusters were preferable. This could be achieved using methods such as classification and regression tree (CART) analysis; however, we opted for the conditional tree approach because it avoids the analyst having to decide on a number of parameters concerning the size and purity of nodes in the resulting tree, as well as having to manually prune it in the end, which can add significant bias. The tool was derived from an institutional registry, and the results may not be generalizable to other settings. However, the baseline characteristics of the patients included in the registry are comparable to those of other national hip arthroplasty registries . Although total hip replacement is already an established and highly successful intervention, there have been modifications in surgical practice (including implant selection and surgical techniques) and minor changes in patient baseline status over the 20-year follow-up period. Consequently, the results from past patients presented in the information tool may not be exactly the same for today’s patients. To explore this, further analyses must be conducted with data from more recent years, which will sacrifice length of follow-up but will allow examination of the potential impacts of changes in practice.
The information tool, based on a survey of patients’ perceived concerns and interests and the corresponding long-term data from a large institutional registry, makes past patients’ experience accessible, understandable, and visible for today’s patients and their clinicians. It provides information that is useful during the decision about surgery, and it offers profile-specific patient experience for meaningful discussions and expectation management perioperatively and over the long term. Finally, it provides tailored information complementing the surgeon’s preoperative planning strategy. Potentially complementing prediction models, the tool is a comprehensive illustration of the trajectories of relevant outcomes up to 20 years after THA from previous “Patients like me”.
Metabolic profiling and gene expression analyses shed light on the cold adaptation mechanisms of Saposhnikovia divaricata
Northeast China encompasses a land area of 1.45 million square kilometres and is characterized by long winters, making it the coldest region in China . The markedly low temperatures, coupled with substantial snowfall and ice formation, place significant constraints on agricultural growing seasons . The average annual air temperatures in the three provinces range from −2 to +3 °C, and during winter daytime hours temperatures can plummet to between −34 and −30 °C. In most parts of Northeast China, the frost-free season does not exceed 160 days . Multiple studies have demonstrated that cold stress poses diverse challenges to plants, such as ion imbalances, osmotic stress, oxidative damage, compromised photosynthetic efficiency, cellular membrane rigidification, and disruptions in metabolic processes , . Therefore, tolerance of low temperatures is crucial for plants to withstand cold conditions, especially among perennial herbs , . Plants are particularly susceptible to significant temperature fluctuations due to large-scale atmospheric variations and the occurrence of extreme weather events . To respond to adverse abiotic stress, sessile plants need to perceive stress signals and respond to the generation of harmful reactive oxygen species (ROS). To counteract the detrimental effects of excessive ROS, such as membrane damage, protein inactivation, and DNA breakage, plants have evolved various adaptive strategies that involve enzymatic scavengers (such as ascorbate peroxidase [APX], peroxidase [POD], superoxide dismutase [SOD], and catalase [CAT]), as well as nonenzymatic components, including proline, sugars, amino acids, and secondary metabolites . The transcription factor PtrbHLH has been demonstrated to bind the E-box motif in the promoter region of the POD gene and to activate POD activity for H2O2 scavenging; this activation enhances cold tolerance under freezing or chilling temperatures in tobacco and lemon plants . Heterologous expression of StCBF1 and StCBF4 enhanced cold tolerance in Arabidopsis to varying degrees; furthermore, increased antioxidant enzyme activity and reduced ROS accumulation were observed . Soluble sugars (SSs) play dual roles as osmolytes and as signalling molecules, regulating various stress-related genes involved in sucrose metabolism, the photosynthesis pathway and osmolyte biosynthesis under abiotic stress conditions . Flavonoids and terpenoids are vital secondary metabolites known for their significant ROS-scavenging activities, and they have been directly associated with coping mechanisms against adverse climatic stresses , . Additionally, the abscisic acid (ABA)-dependent pathway has been shown to participate in plant stress responses. Transcript-level analysis of the NAC transcription factor OsNAP showed that it was significantly induced by ABA and abiotic stresses, and its enhanced expression resulted in notably increased resistance to drought, low temperature and high salinity through an ABA-mediated network . Saposhnikovia divaricata (Turcz.) Schischk. (SD) is a perennial medicinal plant belonging to the Apiaceae family that is extensively found in the northeastern and northern provinces of China owing to its significant medicinal, nutritional, and economic value.
It has a substantial domestic and international market and has long served as a traditional Chinese medicine . SD is recognized as a top-grade medicinal plant by the Chinese Pharmacopoeia Commission and is commonly employed for treating rheumatism, stroke, fever, colds and arthralgia . Extensive research on the roots and leaves of SD has identified numerous bioactive components, including polysaccharides, flavonoids, coumarins, volatile oils, lignins, and organic acids. According to recent pharmacological investigations, these compounds exhibit antipyretic, anti-inflammatory, hepatoprotective (detoxifying), antiallergic, and anti-influenza activities – . The literature suggests that genetic background and environmental factors significantly influence the accumulation of bioactive compounds. Therefore, it has become crucial, yet difficult, to elucidate the response mechanisms of key metabolites under different climatic conditions. Currently, high-throughput omics approaches, for example genomics, metabolomics, transcriptomics, and proteomics, are effectively utilized to investigate the response mechanisms of plants under various abiotic stresses and to elucidate crucial patterns of gene expression and metabolite accumulation , . Recently, RNA-seq has been utilized to dissect the mechanisms underlying low-temperature tolerance by examining gene expression and the regulation of target metabolites. Dynamic profiling of the transcriptome has identified hub genes and pathways involved in low-temperature stress in castor , faba bean , and Fragaria nilgerrensis . Furthermore, studies have investigated complex mechanisms through gene coexpression network analysis and the detection of stress tolerance genes linked to candidate metabolites in salt, water/drought, and heat responses – . Overall, integrated research combining transcriptomic and metabolomic approaches has made substantial progress in enhancing our understanding of the intricate regulatory networks operating under stressful conditions. However, comprehensive comparative studies of the molecular mechanisms at work in SD during cold stress are lacking. Therefore, an integrated assay encompassing transcriptome and metabolome analyses should be considered a highly effective approach for elucidating the pivotal genes and metabolites involved in the responsive molecular mechanisms under cold stress. In this study, transcriptomic and metabolomic analyses of SD were performed to determine how gene expression and the biosynthesis of key bioactive metabolites respond to cold stress treatments of different durations. Integrated analysis of the transcriptome and metabolome revealed significant upregulation or downregulation of numerous genes and metabolites. Notably, SS emerged as one of the most significantly altered classes of bioactive compounds. Intriguingly, compared with short-term stress, long-term cold stress treatment resulted in greater increases in the SS content. Furthermore, certain transcription factors, such as MYB, may play an important role in conferring cold tolerance. These findings provide a comprehensive understanding of how genes and metabolites respond to cold stress in SD, thereby contributing valuable insights for cultivation practices and breeding programs targeting accessions with commercially valuable metabolite profiles.
Plant sample gathering and cold stress treatment
Saposhnikovia divaricata (Turcz.) Schischk. seeds were collected from Tahe County, which is located in the Greater Khingan Mountains region of China, and were identified by Ma Dezhi, deputy director of Qiqihar Medical University. The plant collection and use were in accordance with all relevant guidelines, and the plant samples were deposited in the Qiqihar Medical University Chinese medicinal herbarium (No. QMU172024935). This region has a cold continental monsoon climate characterized by a mean annual temperature of 0 °C, mean annual rainfall of 460 mm, and approximately 2600 h of annual sunshine. Professor Ma Wei from Heilongjiang University of Chinese Medicine verified the authenticity of the plant material. The seeds were cultivated in a controlled setting at Qiqihar Medical University. Following germination, seedlings were grown for 48 days under alternating light–dark cycles of 14 h at 24 °C and 10 h at 20 °C while maintaining 55% humidity and a light intensity of 12,000 lx. To induce cold stress, 48-day-old seedlings were subjected to 4 °C for different durations: 6 h, 12 h, 24 h, 48 h, and up to 72 h. Afterwards, the leaf samples were promptly excised, frozen in liquid nitrogen, and stored at −80 °C for later analyses. These samples served as material for both transcriptome sequencing and LC–MS/MS analysis. Additionally, a control group consisting of seedlings that did not undergo any cold stress treatment was established. For each treatment condition, three groups of uniformly growing plants served as biological replicates.
Physiological and biochemical features of SD under cold stress
The relative conductivity of the SD seedlings during cold stress was assessed following methods described in the previous literature . The biochemical characteristics of the seedlings were measured with reagent kits from Nanjing Jiancheng Bioengineering Institute (Nanjing, China). The malondialdehyde (MDA) content was analysed with a thiobarbituric acid assay (product no. A003-3-1). Phenylalanine ammonia lyase (PAL), a key enzyme in phenylpropanoid metabolism, was quantified by spectrophotometry (product no. A137-1-1). The soluble sugar content was measured by anthrone colorimetry (product no. A145-1-1). Peroxidase (POD) activity was assessed by monitoring absorbance changes at 420 nm (product no. A084-3-1). Catalase (CAT) activity was measured by the colorimetric method with ammonium molybdate (product no. A007-1-1). Proline (PRO) levels were assessed through a reaction with ninhydrin (product no. A107-1-1). Total antioxidant capacity (T-AOC) was gauged by a colorimetric method (product no. A105-1). Superoxide dismutase (SOD) activity was analysed using a xanthine-xanthine oxidase-nitro blue tetrazolium assay (product no. A001-1-2). All experiments were conducted in triplicate to ensure reliability.
Untargeted metabolic profiling
For metabolite analysis, samples from the cold stress groups (Cold6h and Cold48h) and the control group (CK) were prepared. Metabolite extraction, as well as qualitative and quantitative analyses, was conducted following previously established methods . The plant samples were freeze-dried, pulverized into a fine powder, and then extracted in 70% aqueous methanol at 4 °C overnight.
The resulting solutions were analysed by an ultrahigh-performance LC–MS/MS method with a Vanquish UHPLC system coupled to an Orbitrap Q Exactive™ HF mass spectrometer. The metabolite data were subjected to principal component analysis (PCA) and orthogonal partial least squares discriminant analysis (OPLS-DA) to discriminate the groups and identify candidate metabolites. Metabolites exhibiting |log2 fold change (FC)| ≥ 1 and a variable importance in projection (VIP) ≥ 1 were identified as differentially expressed metabolites (DEMs) between groups (Cold6h vs. CK, Cold48h vs. CK). Enrichment analysis of these DEMs was subsequently performed based on the KEGG database.
RNA extraction and Illumina sequencing
Transcriptome sequencing of SD seedling leaves was performed by Novogene Corporation, Inc. Total RNA was extracted by the TRIzol procedure from samples collected at three time points, namely 0 h (CK), 6 h (Cold6h), and 48 h (Cold48h) of cold stress, with three biological replicates each. The integrity, concentration, and purity of these RNA samples were rigorously verified to guarantee high quality. After data filtration, high-quality clean reads were obtained. Transcript assembly was conducted using Trinity version 2.11.0. Differential expression analysis was performed with DESeq2, and the screening criteria for DEGs were |log2 fold change (FC)| ≥ 1 and a false discovery rate (FDR) < 0.05.
WGCNA and gene network visualization
The weighted gene coexpression network analysis (WGCNA) method was utilized to construct a coexpression network of the selected DEGs. The soft-thresholding power β was determined according to the scale-free topology criterion to maximize the correlation coefficient, and the resulting adjacency matrix was then transformed into a topological overlap matrix (TOM). Subsequently, gene correlations were analysed, the eigengene of each module was calculated, and both whole-network and intramodule connectivity were determined using weighted correlation coefficients. Based on the clustering relationships among genes, the DEGs were grouped into distinct modules, enabling investigation of the correlations between module eigengenes (MEs) and the physiological and biochemical features of SD at various time points under cold stress. The networks were visualized with Cytoscape software (v3.10.1, USA).
Quantitative real-time RT-PCR analysis
Quantitative real-time RT-PCR (qRT-PCR) was used to verify the gene expression results derived from the transcriptomic data. Total RNA was first isolated, followed by cDNA preparation. The qRT-PCR experiments were conducted with the BlazeTaq™ SYBR® Green qPCR Mix 2.0 Kit, with GAPDH as the endogenous control gene. Relative expression levels were calculated using the 2^−ΔΔCt method, and every experiment included three technical replicates to ensure reliability.
Statistical analysis
Data are presented as the mean ± standard error of the mean (SEM). Statistical significance between groups was assessed using Student’s t test, with *P < 0.05 and **P < 0.01 indicating statistical significance. Correlation analyses were performed using the Pearson correlation coefficient, with screening criteria of a correlation coefficient > 0.80 and P < 0.05. DEGs and DEMs were annotated using the KEGG database ( www.kegg.jp/kegg/kegg1.html ) . Enrichment analysis was conducted with stringent filtering criteria, requiring P < 0.01 for gene pathways and P < 0.05 for metabolic pathways to determine significance.
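As a concrete illustration of the screening thresholds above, the following minimal pandas sketch applies the DEG and DEM criteria to toy result tables. The column names and all values are invented for illustration only; the real tables come from DESeq2 and the metabolomics pipeline.

```python
import pandas as pd

# Hypothetical result tables; gene/metabolite names echo the text,
# but every number here is made up.
genes = pd.DataFrame({
    "gene": ["SdSPS1", "SdTPS2", "SdINV3"],
    "log2fc": [2.4, 1.1, -0.3],
    "fdr": [0.001, 0.04, 0.30],
})
metabolites = pd.DataFrame({
    "metabolite": ["sucrose", "trehalose", "esculetin"],
    "log2fc": [1.8, 2.2, 0.6],
    "vip": [1.6, 1.3, 0.8],
    "pval": [0.01, 0.02, 0.20],
})

# DEG criteria from the methods: |log2FC| >= 1 and FDR < 0.05.
degs = genes[(genes["log2fc"].abs() >= 1) & (genes["fdr"] < 0.05)]

# DEM criteria: |log2FC| >= 1 and VIP >= 1; the results section additionally
# requires (-log10 P) >= 1, i.e. P <= 0.1, included here for completeness.
dems = metabolites[
    (metabolites["log2fc"].abs() >= 1)
    & (metabolites["vip"] >= 1)
    & (metabolites["pval"] <= 0.1)
]
print(degs, dems, sep="\n")
```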
Morphological and physiological changes and enzyme activity of SD under cold stress
During cold stress, the leaves of the SD seedlings gradually wilted, and significant lodging of the plants was observed, as illustrated in Fig. A,B. To characterize the physiological responses of SD to cold stress, biochemical experiments were conducted on samples subjected to control and cold treatments for different durations. Specifically, under stress, the MDA content increased significantly, showing a twofold increase at 12 h, a 2.33-fold increase at 48 h, and a 2.8-fold increase at 72 h (Fig. C). Additionally, osmoregulatory substances such as proline, soluble sugar, and soluble protein also exhibited substantial increases. The proline levels rose markedly, by 5.32-fold at 6 h and 3.66-fold at 48 h, and the soluble sugar content reached its highest elevation at 72 h, with a 3.25-fold increase (Fig. D,E). Conversely, the soluble protein content initially dropped to 0.97-fold of the control level at 6 h but then rose significantly, reaching a 1.8-fold increase at 72 h (Fig. F). Among the antioxidant enzymes, CAT, SOD, and APX all exhibited significant increases in activity during cold stress. Notably, CAT displayed its greatest increase, a 1.59-fold change, at 72 h (Fig. G). SOD showed its greatest increases at 6 and 48 h, with fold changes of 2.05 and 2.83, respectively (Fig. H). APX exhibited its most pronounced increase at 24 h, with a fold change of 2.67 (Fig. J). Conversely, PAL activity initially increased by a factor of 1.44 at 6 h but subsequently decreased significantly, reaching its maximum decline (a fold change of −0.53) at 72 h (Fig. I). These findings collectively indicate a substantial accumulation of MDA under cold conditions, suggesting damage to the SD cell membrane structure along with severe membrane lipid peroxidation. Furthermore, there were notable increases in the content of osmotic regulatory substances and in the activity of antioxidant enzymes, which likely contribute to the ability of SD to withstand cold stress.
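For reference, fold changes and significance markers like those reported above can be derived from triplicate measurements along the following lines. The sketch uses invented SOD activity values, not the study’s raw data, and applies Student’s t test as described in the statistical methods.

```python
import numpy as np
from scipy import stats

# Invented triplicate SOD activities (arbitrary units) for CK vs. 48 h cold.
control = np.array([12.1, 11.8, 12.4])
cold48h = np.array([33.9, 34.6, 33.2])

fold_change = cold48h.mean() / control.mean()        # ~2.8-fold here
t_stat, p_value = stats.ttest_ind(cold48h, control)  # Student's t test

# Significance convention from the methods: *P < 0.05, **P < 0.01.
stars = "**" if p_value < 0.01 else "*" if p_value < 0.05 else "ns"
print(f"fold change = {fold_change:.2f}, P = {p_value:.3g} ({stars})")
```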
To explore the metabolic alterations resulting from cold stress, we performed untargeted metabolomic profiling and compared differentially expressed metabolites (DEMs) among the Cold6h vs. CK, Cold48h vs. CK, and Cold48h vs. Cold6h comparisons. In total, 1396 metabolites were detected, including fatty acids; amino acids and derivatives; lysolecithins (LPE, LPC, and LPI); alkaloids; organic acids; flavonoids; lignins and coumarins; terpenoids; phenolic acids; and several other compounds (Supplementary Table ). PCA revealed significant differences among the stress groups, with the first two principal components explaining 28.96% and 16.04% of the variance, respectively. The three biological replicates within each group (CK, Cold6h, Cold48h, and QC) formed distinct clusters (Fig. A). Venn diagrams illustrated the differences in DEMs among these comparisons; esculetin and pimpinellin were shared among all three (Fig. B). By applying thresholds of |log2 fold change| ≥ 1, VIP ≥ 1 and (−log10 P value) ≥ 1, a total of 138 DEMs were identified in the comparison between Cold6h and CK, whereas 85 DEMs were identified in the comparison between Cold48h and CK (Fig. C,D). These metabolites were mainly classified into 10 categories: lipids, organic acids, amino acids and derivatives, lignins and coumarins, phenolic acids, alkaloids, nucleotides and derivatives, flavonoids, terpenoids and others. In the Cold6h vs. CK comparison, 86 DEMs were upregulated and 52 downregulated (Fig. C and Supplementary Table ), whereas in the Cold48h vs. CK comparison, 59 DEMs were upregulated and 26 downregulated (Fig. D and Supplementary Table ). Notably, the upregulated metabolites were mainly saccharides, flavonoids and coumarins, implying that these compounds may facilitate the adaptation of SD to cold. Heatmap analysis revealed clear separation of these DEMs into three distinct groups, underscoring how significantly cold stress altered the metabolite profiles of SD (Fig. E). To elucidate the functions of the differentially expressed metabolites, we performed KEGG pathway enrichment based on the enrichment factor, P value, and number of enriched compounds. The results revealed that in the comparison between Cold6h and CK, DEMs were predominantly enriched in five pathways: starch and sucrose metabolism; galactose metabolism; phenylalanine metabolism; pyruvate metabolism; and valine, leucine, and isoleucine biosynthesis (Fig. A). In the comparison between Cold48h and CK, DEMs mainly accumulated in five pathways: galactose metabolism, glycolysis/gluconeogenesis, starch and sucrose metabolism, ABC transporters, and nicotinate and nicotinamide metabolism (Fig. B). In plants, the transcriptional coactivator MBF1c is closely related to heat-induced protein and trehalose biosynthesis (trehalose phosphate synthase 5) under heat stress . Galactose metabolism is crucial for plant development, as are galactose biosynthesis, raffinose biosynthesis and lipid metabolism; moreover, it participates in responses to salt, drought, osmotic, ABA and cold stress , . There is a significant correlation between amino acid metabolism and physiological responses to abiotic stresses, and proline, glycine, leucine, and valine have been investigated in plants reacting to abiotic stresses . Among the top 20 DEMs, ordered by |log2(fold change)|, in the Cold6h vs. CK comparison, 6-formyl-isoophiopogonanone A was the most markedly altered metabolite, with a log2(fold change) of 4.88 (Fig. C). In the Cold48h vs.
CK comparison, ethyl-β-D-glucuronide was the most markedly altered metabolite, displaying a log2(fold change) of 3.95 (Fig. D). These results indicate substantial upregulation of saccharides, alkaloids, flavonoids and coumarins in SD under cold stress.
Transcriptomic response of SD under cold stress
The transcriptional pattern of SD plants exposed to low temperature was assessed using a quantitative transcriptome sequencing strategy on the Illumina NovaSeq 6000 platform, with three independent replicates per group (CK, Cold6h, and Cold48h). After removing raw reads containing adapters, N bases, and low-quality sequences, we obtained 20.37–24.72 million clean reads. Subsequent to quality assembly, we generated 6.1 GB of clean reads with a Q30 percentage > 91.58%. Gene expression levels were normalized using the TMM method, and differentially expressed genes were determined based on the following criteria: |log2 fold change| > 1 and P < 0.05. A total of 6024 DEGs (4100 upregulated and 2104 downregulated) in the Cold6h vs. CK comparison and 13,155 DEGs (6932 upregulated and 6223 downregulated) in the Cold48h vs. CK comparison were identified (Fig. A,B). Notably, a Venn diagram revealed 1415 shared DEGs across the Cold6h vs. CK, Cold48h vs. CK, and Cold48h vs. Cold6h comparisons (Fig. C and Supplementary Table ). These DEGs were associated with sugar metabolism, ROS-scavenging and detoxification pathways, terpenoid and flavonoid biosynthesis, plant hormone signal transduction, lipid metabolism, and phenylpropanoid metabolism (Supplementary Table ). GO enrichment analysis classified the DEGs into three ontologies: biological process (BP), cellular component (CC) and molecular function (MF). The significant DEGs mainly clustered into the BP and MF categories. In the Cold6h vs. CK comparison, the DEGs were mainly associated with oxidoreductase activity (MF) and embryo development (BP) (Fig. A). In the Cold48h vs. CK comparison, the DEGs in the BP category included the cellular protein modification process, transmembrane transport, the lipid metabolic process and the carbohydrate metabolic process, and in the MF category the DEGs were associated with transferase activity, kinase activity, oxidoreductase activity, transcription factor (TF) activity, ion binding and transmembrane transporter activity (Fig. B). KEGG enrichment analysis of the DEGs was then conducted to examine the influence of cold temperature on pathway enrichment in SD. Several pathways, such as phenylpropanoid metabolism and plant hormone signal transduction, were significantly implicated under cold temperature. In addition, several other pathways were enriched, such as alpha-linolenic acid metabolism, flavonoid biosynthesis, the calcium signalling pathway, mineral absorption and alcoholism (Fig. C,D).
Transcription factors (TFs) and weighted gene coexpression network analysis (WGCNA)
The diversity of gene expression patterns ultimately shapes the response of SD to cold stress. We identified 2071 putative TFs from 90 distinct families, with the top ten families being AP2/ERF-ERF (122), C2H2 (106), MYB-related (100), NAC (92), others (88), C3H (88), bHLH (85), WRKY (80), bZIP (68), and MYB (59) (Fig. A). We employed WGCNA to explore the relationships between physiological indices and important DEGs.
In WGCNA, clusters of highly correlated genes are defined as modules, with genes within each module displaying strong correlations. WGCNA identified eight distinct modules, distinguished by different colours (Fig. B). The correlation coefficients between several physiological characteristics and these eight gene modules were analysed (Fig. C). Notably, the MEyellow module exhibited significant correlations with SOD activity, MDA levels, and soluble sugar content ( P < 0.05) (Fig. D). Based on the PPI analysis results, we identified twenty hub genes within this module (Fig. E), including two PP2Cs , two GH3s , and two HSPs . Importantly, all twenty hub genes were upregulated in the Cold6h vs. CK and Cold48h vs. CK comparisons, suggesting their potential association with the resistance of SD to cold environments. Cold stress significantly affected the dynamics of plant hormones in SD. The ‘plant hormone signal transduction’ pathway was markedly enriched after Cold6h and Cold48h stress, and the DEGs enriched in this pathway included PP2C-24 , PP2C-50 , GH3-6 , SAUR50 , and ETR2 , critical genes participating in the ABA, auxin and ethylene (ET) signal transduction pathways. The expression levels of these DEGs were significantly correlated with the Cold48h group, indicating that they are key cold-responsive genes (Fig. F).
Integrated metabolomic and transcriptomic analysis reveals a key role for starch and sucrose metabolism under cold stress
To further explore the correlations between the DEGs and DEMs of SD under cold stress, we performed a joint co-network analysis of the transcriptome and metabolome data. Pearson correlation coefficients (PCC > 0.8) were used to associate DEGs with DEMs. The correlation analysis was visualized with a nine-quadrant plot, wherein each quadrant represents a distinct correlation scenario between genes and metabolites: quadrants 3 and 7 indicate positive correlations between DEGs and DEMs, quadrant 5 indicates no significant correlation, and the remaining quadrants suggest negative correlations (Fig. A,B). Furthermore, KEGG enrichment analysis of the DEGs and DEMs revealed consistent enrichment of starch and sucrose metabolism, ABC transporters, and amino sugar and nucleotide sugar metabolism in the Cold48h vs. CK comparison ( P < 0.05), as well as enrichment of the phenylpropanoid biosynthesis pathway in the Cold6h vs. CK comparison and of the plant hormone signal transduction pathway in the Cold48h vs. Cold6h comparison. These findings highlight the potential importance of starch/sucrose metabolism, as well as plant hormone signalling pathways, in conferring cold stress resistance to SD plants (Fig. C,D). Sugars and starches, the primary carbohydrates in plants, serve both as sources of energy and as crucial substances for stress resistance. Sugars play multiple roles in cold adaptation: they act as osmoprotectants to shield cells from freezing damage, serve as signalling molecules in the signal transduction pathways of the cold response, and modulate the antioxidant enzyme system to mitigate oxidative stress induced by low temperatures. Metabolomic analysis revealed significant accumulation of raffinose, sucrose, D-glucose-6P, β-D-fructose-6P and trehalose under cold stress (Fig. A,C).
Moreover, gene expression analysis indicated upregulation of various DEGs encoding enzymes involved in sugar metabolism pathways , such as α-galactosidase ( α-Gal ), sucrose synthase ( SS ), sucrose phosphate synthase ( SPS ), invertase ( INV ), fructokinase ( ScrK ), trehalose-6-phosphate synthase ( TPS ), trehalase ( TREH ) and trehalose-6-phosphate phosphatase ( otsB ). qPCR analysis demonstrated the substantial regulation of several genes associated with sugar biosynthesis under cold stress, such as SdINV3 in the Cold6h group and SdSPS1 / SdSS5 in the Cold48h group (Fig. B). In summary, our transcriptome and metabolome data suggest that sugar-related metabolic pathways, along with flavonoid/terpenoid biosynthesis and plant hormone signalling pathways, are primarily responsible for the resistance of SD to cold stress (Fig. ).
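The qRT-PCR fold changes cited here follow the 2^−ΔΔCt method described in the methods, with GAPDH as the reference gene. Below is a worked sketch with hypothetical Ct values; the gene pairing and all numbers are illustrative only.

```python
# 2^-ΔΔCt relative expression; all Ct values below are hypothetical.
ct_target_treated, ct_ref_treated = 22.0, 18.0   # e.g. SdSPS1 / GAPDH, Cold48h
ct_target_control, ct_ref_control = 25.0, 18.2   # same genes, CK

delta_ct_treated = ct_target_treated - ct_ref_treated    # 4.0
delta_ct_control = ct_target_control - ct_ref_control    # 6.8
delta_delta_ct = delta_ct_treated - delta_ct_control     # -2.8

relative_expression = 2 ** (-delta_delta_ct)             # ~7-fold upregulation
print(f"relative expression = {relative_expression:.2f}")
```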
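A minimal sketch of the gene–metabolite correlation screen underlying the nine-quadrant classification described above (|PCC| > 0.8 and P < 0.05, per the statistical methods). The expression profiles here are randomly generated stand-ins for a single DEG–DEM pair; the real analysis evaluates every pair.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Toy profiles across 9 samples (3 replicates x CK/Cold6h/Cold48h)
# for one gene and one metabolite; values are synthetic.
gene_expr = rng.normal(size=9)
metab_level = 0.9 * gene_expr + rng.normal(scale=0.3, size=9)

r, p = stats.pearsonr(gene_expr, metab_level)

# Screening rule from the integrated analysis: |PCC| > 0.8 and P < 0.05.
if abs(r) > 0.8 and p < 0.05:
    label = "positive correlation (quadrants 3/7)" if r > 0 else "negative correlation"
    print(f"r = {r:.2f}, P = {p:.3g}: {label}")
else:
    print(f"r = {r:.2f}, P = {p:.3g}: no significant correlation (quadrant 5)")
```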
Within the scope of traditional Chinese medicine, SD has been established for millennia, valued not only for its therapeutic efficacy but also for the complex interplay between its phytochemical constituents and environmental conditions. The phytochemical profile of SD is significantly influenced by various environmental factors, including temperature, light, soil composition, and water availability. Its components, which primarily consist of coumarins, polysaccharides, and volatile oils, are pivotal for its pharmacological properties. Research indicates that environmental stresses, such as temperature fluctuations, can induce secondary metabolite biosynthesis in plants; for instance, chromone levels in SD have been observed to increase under abiotic stress . Under low-temperature stress, SD exhibits a striking adaptive mechanism that alters its active component profile. This physiological response is crucial for survival and for the maintenance of medicinal properties under adverse conditions. A previous study confirmed that low-temperature stress substantially enhances the concentration of polysaccharides, which are thought to play protective roles against abiotic damage . Furthermore, gene expression changes in SD during drought stress have been explored, demonstrating the upregulation of DEGs related to secondary metabolite pathways, including those responsible for coumarin and flavonoid biosynthesis . These findings indicate that low-temperature conditions not only hinder plant survival but also trigger complex biochemical responses, thereby enhancing the plant’s medicinal and nutritional value. However, limited information is available on the key genes, enzymes and metabolites associated with the molecular mechanisms regulating the low-temperature resistance of SD. Herein, to elucidate the cold tolerance mechanism of SD, we conducted biochemical assays, along with transcriptome and metabolome analyses, under cold treatment at 4 °C. The results showed that the responses of SD to cold stress were chiefly linked to plant hormone signalling pathways, starch and sucrose metabolism, secondary metabolite biosynthesis, and the activity of ROS-scavenging pathways. Under Cold6h and Cold48h stress, the downregulation of the chlorophyll-binding protein-encoding gene CP24 suggested a decrease in the plants’ capacity to capture and utilize light effectively, potentially leading to reduced photosynthetic capability and overall energy acquisition in cold-treated SD (Supplementary Table ). Chlorophyll-binding proteins, including CP24, are integral components of the light-harvesting complex (LHC) of photosystem II, which is essential for capturing and transferring light energy during photosynthesis . The buildup of MDA during cold stress reflects the extent of oxidative damage and can trigger a cascade of antioxidative responses (Fig. C). MDA can modify proteins and nucleic acids, thereby altering their function and potentially activating stress response pathways. An increase in MDA can also act as a signal for the induction of antioxidant enzymes such as SOD and CAT, which play a crucial role in ameliorating oxidative damage .
Upon reviewing the transcriptomic data, we identified differentially expressed P5CS (pyrroline-5-carboxylate synthetase) genes, associated with proline biosynthesis, in both the Cold6h and Cold48h groups in comparison with the control. CAT plays a crucial role in detoxifying hydrogen peroxide (H2O2), and an increase in CAT levels during cold treatment indicates an enhanced antioxidative defence mechanism. SOD is essential for the conversion of superoxide radicals into hydrogen peroxide and oxygen, thereby acting as a primary defence against ROS . The enzyme PAL is crucial in the biosynthesis of phenylpropanoids, which are vital for plant adjustment to environmental stresses. The upregulation of PAL indicates increased accumulation of phenolic compounds, which participate in structural reinforcement, antioxidative responses, and signalling (Fig. I). In response to cold stress, the cooperative action of the SOD, CAT, and APX enzymes involves their spatial and functional complementation within cellular compartments. Specifically, SOD promotes the conversion of superoxide radicals into oxygen and hydrogen peroxide; CAT and APX then detoxify hydrogen peroxide through distinct mechanisms: CAT decomposes it into water and oxygen (primarily in peroxisomes), whereas APX not only detoxifies hydrogen peroxide but also regenerates ascorbate as part of the ascorbate–glutathione cycle for further ROS scavenging (Fig. J). We hypothesized that SOD and CAT would exhibit greater sensitivity to cold stress than PAL, whereas APX would play a predominant role in scavenging ROS in SD. We employed untargeted metabolome analysis to investigate alterations in the metabolites of SD under low-temperature stress. A notable finding of our study is the significant accumulation of sugar-related metabolites, including sucrose, trehalose, and fructose (Fig. ). Sugars play pivotal roles ranging from serving as primary sources of energy to acting as signalling molecules that regulate various physiological processes in plant development and abiotic-stress tolerance. Sugars are not only metabolic fuels but also crucial regulators of gene expression, influencing plant survival and development . Under abiotic stresses, sugars act as osmoprotectants, playing a significant role in osmoregulation. This function is crucial for maintaining cellular integrity under stress by balancing the osmotic pressure within cells, thereby protecting them from damage caused by dehydration and ionic toxicity . For example, trehalose is a nonreducing sugar known for its role in stabilizing proteins and lipid membranes under stress conditions (including low temperatures), thereby enhancing plant stress tolerance . Other sugar-related metabolites were also overaccumulated in our analysis (such as α,α-trehalose, sucrose, β-D-fructose 6-phosphate, stachyose, and α-lactose) (Supplementary Tables , ). Stachyose, a tetrasaccharide containing two α-D-galactose units, one α-D-glucose, and one β-D-fructose, has been identified as a determinant component of plant responses to different stresses. It acts as an osmoprotectant, contributing to the maintenance of cell membranes and proteins during stress conditions. Peters and Keller demonstrated that stachyose levels increase in plants exposed to drought, suggesting its involvement in osmotic adjustment and protection against dehydration .
Additionally, stachyose has been implicated in alleviating oxidative stress by neutralizing ROS, thus protecting cellular components from oxidative damage . Integrating the metabolome and transcriptome results, we deduced that increased levels of sucrose-phosphate synthase ( SPS ) and trehalose-phosphate synthase ( TPS ) may promote the high-level accumulation of sugars (Supplementary Table ). The SPS and TPS genes are crucial for the synthesis of sucrose and trehalose, respectively, thus impacting the accumulation of sugar metabolites and playing vital roles in plant metabolism and stress response. The overexpression of SPS can significantly boost resistance to heat stress by improving various physiological and biochemical parameters (including increasing sucrose and chlorophyll contents), improving photosynthetic efficiency, and reducing cellular damage . At the molecular level, the induction of TPS and the coordination of downstream gene expression under low temperatures are regulated by a complex network of CBF/DREB1 and ICE1 . These results suggest that TPS and SPS may play key roles in the cold tolerance of SD. Nevertheless, the molecular regulatory mechanism of these two genes remains unclear. In general, plant hormones such as auxins (IAA), gibberellins (GAs), ethylene (ETH), abscisic acid (ABA), salicylic acid (SA), and jasmonic acid (JA) coordinate a wide range of physiological and molecular responses that enable plants to endure and adapt to unfavorable stress conditions. These hormones regulate diverse stress-responsive pathways, thereby contributing to the maintenance of cellular homeostasis and the reinforcement of plant defense mechanisms . Under normal growth conditions, SnRK2-PP2C-SnRK1 forms a complex that contributes to the suppression of SnRK1 function. This facilitates the effective functioning of TOR (a growth-promoting factor), thereby promoting plant growth. Conversely, under adverse conditions, accumulated ABA enhances the interaction between PYR/PYL receptors and PP2C, resulting in the disassembly of the SnRK2-PP2C-SnRK1 complex . Cold treatment upregulated SnRK1 and PP2C and inhibited TOR activity, thereby suppressing plant growth and enhancing cold resistance (Supplementary Table ). In the auxin signalling pathway, the increased expression of ARF (auxin response factor) in the Cold6h and Cold48h environments not only controlled the increased expression of the downstream hub genes SAUR and GH3 to cope with abiotic stress but also influenced plant growth regulation by binding to Aux/IAA proteins (Fig. ). In the JA signal transduction pathway, upregulated MYB TFs can bind to the promoters of genes involved in proline synthesis, such as P5CS (pyrroline-5-carboxylate synthetase), thereby enhancing their transcription and consequently promoting proline accumulation to improve resistance to abiotic stress (Fig. ). The R2R3-MYB-type TF MtMYBS1 confers enhanced salinity resistance when expressed in Arabidopsis thaliana . We also found that OPR and LOX , two crucial enzymes in the jasmonic acid biosynthetic pathway, were upregulated. OPR catalyzes the final step of JA synthesis, reducing OPDA to the JA precursor, whereas LOX catalyzes the initial step, the oxidation of unsaturated fatty acids to hydroperoxides (Supplementary Table ).
In this study, the simultaneous action of ABA, IAA, and JA is hypothesized to contribute to the maintenance of SD growth under cold exposure. The metabolome analysis indicated that the metabolites with notable alterations participated in lipid metabolism and flavonoid and terpenoid biosynthesis (Supplementary Tables , ). The majority of the top 20 DEMs were associated with flavonoids, terpenoids, and lipids (Fig. ). The accumulation of flavonoids and terpenoids, which are important secondary metabolites found in medicinal herbs, primarily contributes to resistance to adverse environmental conditions and the mitigation of abiotic stresses. The biosynthetic pathway and regulatory mechanism of flavonoids and terpenoids in SD have not been extensively characterized, unlike those in other well-studied model plants. Based on the analysis of the top 20 DEMs, we found that 6-formylisoophiopogonanone A, flavokawain B, and isorhamnetin were the main flavonoids that accumulated. The compound 6-formylisoophiopogonanone A was identified as a homoisoflavonoid, and the majority of homoisoflavonoids exhibited certain scavenging abilities towards ·OH and H 2 O 2 in vitro . Flavokawain B was demonstrated to induce ER stress in glioma cells, subsequently leading to the activation of autophagy pathways . Isorhamnetin exhibited significant protective effects on heart muscle cells against oxidative stress-induced damage through two primary mechanisms: scavenging ROS and inhibiting the extracellular signal-regulated kinase (ERK) pathway . These flavonoids may synergistically upregulate key enzymes and detoxification proteins related to ROS scavenging. Additionally, under cold stress conditions, F3H , an essential enzyme in flavonoid biosynthesis, was observed to be overexpressed (Supplementary Table ). The overproduction of the terpenoids ganoderiol A and forskolin enhanced resistance to environmental stress, as determined via the use of SIMX1 . In this research, several enzymes associated with terpenoid biosynthesis were identified: Tps (terpene synthase), Sps (solanesyl diphosphate synthase), FPS (farnesene synthase), HMGS (hydroxymethylglutaryl-CoA synthase), and HMGCR (3-hydroxy-3-methylglutaryl coenzyme A reductase). Among the identified DEMs, lipids accounted for a significant portion, especially unsaturated fatty acids (Supplementary Tables , ). Chilling-resistant plants exhibit elevated levels of unsaturated fatty acids in their membranes as a result of enhanced desaturase enzyme activity during cold acclimation, which enhances membrane fluidity and provides protection against low temperatures . Through transcriptome analysis, we found that upregulated temperature-sensitive sn-2 acyl-lipid omega-3 desaturase (FAD) may promote the synthesis of unsaturated fatty acids. The FAD-catalysed desaturation process is the crucial rate-limiting step in the biosynthesis of unsaturated fatty acids, exhibiting substrate, product, and location selectivity . Mutant Arabidopsis thaliana lacking Fad2 grows slowly in low-temperature environments and exhibits significantly lower cold resistance than wild-type plants . Our study indicated that FAD plays an important role in cold resistance in SD; nevertheless, the mechanism by which FAD regulates cold resistance remains unclear.
To summarize, our findings indicate that sugar-related metabolism, flavonoid and terpenoid biosynthesis, unsaturated fatty acid biosynthesis, and plant hormone signalling are the main protective mechanisms employed by SD against cold stress. This information will help us better estimate the significant effects of low-temperature climates on SD.
This study presents a comprehensive analysis integrating physiological parameters with metabolomic and transcriptomic data to elucidate crucial metabolites, genes, and pathways associated with cold stress in SD plants. This approach identified a total of 1396 differentially expressed metabolites (DEMs), including 138 DEMs for Cold6h versus CK and 85 DEMs for Cold48h versus CK. Integration of transcriptomic data led to the annotation of 1415 common differentially expressed genes (DEGs) across these comparisons. Furthermore, the combined insights from transcriptomics and metabolomics unveiled mechanisms governing SD's response to low temperatures. Exposure to 4 °C was found to modulate sugar-related metabolic pathways, flavonoid/terpenoid biosynthesis, and plant hormone metabolism through the regulation of gene expression involving PP2C , AUX , GH3 , TPS , GPI , INV3 , otsB2 , SPS and SS , while elevating levels of antioxidant compounds and plant hormones in order to sustain normal growth. In conclusion, our results suggest that the protective responses mounted by SD against cold stress primarily involve sugar-related metabolism, secondary metabolite synthesis, unsaturated fatty acid production, and plant hormone signalling. These findings will facilitate further assessment of the profound impacts exerted by low-temperature climates on SD.
Supplementary Information.
Rare pediatric tumors in Germany – not as rare as expected: a study based on data from the Bavarian Cancer Registry and the German Childhood Cancer Registry
About 2200 children and adolescents < 18 years with diagnoses of malignant diseases and central nervous system (CNS) tumors are reported to the German Childhood Cancer Registry (GCCR) every year . According to the RARECARE definition classifying tumors with an incidence rate of < 60/1,000,000 per year as rare, all childhood malignancies would have to be considered rare diseases . Despite this, the treatment of childhood cancer is highly standardized. The German Society for Pediatric Oncology and Hematology (GPOH) and other international study groups ensure the implementation of clinical trials and the development of specific treatment guidelines. As a result, over 90% of all children with malignancies in Germany are treated according to standard therapy protocols and are enrolled in clinical treatment trials whenever possible. This has led to a remarkable improvement in prognosis, with a 15-year overall survival (OS) of 82% nowadays . While many of those entities occur more frequently, several cancer types belong to the heterogeneous group of very rare tumors (VRT) with an incidence rate of < 2/1,000,000 per year or a lack of entity-specific pediatric studies, as defined by the European Cooperative Study Group for Pediatric Rare Tumors (EXPeRT) . Estimates of the occurrence of these VRTs are available from several countries. However, the exact incidence rate remains difficult to determine . Previous analyses mainly determined the proportions of VRTs in relation to all childhood malignancies instead of stating incidence rates. As the inclusion criteria of different registries regarding age and tumor entities vary, the comparability of proportions is limited to some extent. An analysis of the GCCR over a 10-year period showed that only 1.2% of all registered patients met the EXPeRT definition of a VRT . However, the authors already concluded that the numbers were probably underestimated due to a lack of registration of specific diagnoses. The Italian Pediatric Rare Tumor Group (TREP) estimated a proportion of VRTs in childhood between 8 and 10% of all pediatric cancers . An analysis of the American Surveillance, Epidemiology, and End Results (SEER) registry estimated that 8% of cancer patients under the age of 15 years and 14% of cancer patients under the age of 20 years were diagnosed with an entity classifying as a VRT according to the EXPeRT definition . With regard to these varying reports on the occurrence of VRTs in childhood, the aim of the present analysis was to assess the degree of underregistration and estimate a more realistic incidence rate of rare pediatric tumors in Germany.
We obtained data on pediatric cancer cases from the Bavarian Cancer Registry (BCR), a population-based cancer registry of the second-largest federal state of Germany with approximately 13 million inhabitants. Cancer registration in Germany is conducted by population-based public cancer registries on the level of federal states according to the Federal Cancer Registry Data Act. The registration completeness is estimated to be ≥ 90% since 2003 . In Bavaria, hospital physicians, registered doctors, dentists, and pathologists are entitled to pass the patients' data on to their respective regional clinical cancer registries, which transfer the recorded data to a central confidentiality office where the data is pseudonymized and finally passed on to the overarching registration office in the BCR . Herein, all malignant neoplasms are recorded, as well as all CNS tumors and tumors of borderline histology. It is assumed that the occurrence of childhood cancer in Bavaria is representative of Germany, as the respective incidence rates of childhood cancer do not differ significantly between Bavaria and the rest of Germany . All patients registered within the BCR meeting the following inclusion criteria were included in the analysis: diagnosis of malignant disease with the codes of the International Classification of Diseases (ICD) C00–C97, first diagnosis at age < 18 years, and diagnosis between 2002 and 2014. Pseudonymized data was additionally derived from the registry for patient-related data (sex, month and year of birth, municipality code) and tumor-related data (month and year of diagnosis, age at diagnosis, cancer site (ICD) and histology per International Classification of Diseases for Oncology (ICD-O), status per TNM Classification of Malignant Tumors (TNM), grading). For the extraction of patients with VRTs, we applied the definition of the EXPeRT group: "any solid malignancy or borderline tumor characterized by an annual incidence < 2/million and/or not already considered in clinical trials" . The respective entities were selected using ICD, ICD-O, and the International Classification of Childhood Cancer (ICCC) based on the consensus listing of rare pediatric cancers as well as the entity-specific pediatric studies of the GPOH . Duplicate reports could be excluded by rechecking specifications like municipality code, diagnosis, month and year of diagnosis, and month and year of birth. The nationwide German Childhood Cancer Registry (GCCR) records incident cases of all malignancies as well as non-malignant CNS tumors diagnosed in 0- to 17-year-olds in Germany, reported by all pediatric hematology-oncology units in Germany (subject to the patient's or custodians' consent). Before 2009, only patients aged < 15 years were recorded by the GCCR. The analysis and reporting of childhood cancer incidence rate estimates in Germany are usually based on data from the GCCR . Patients with the same combination of age, sex, year of diagnosis, and diagnostic group as well as residence in Bavaria in both databases were identified, and the numbers were compared. In some cases, the distribution of ICD-O morphology codes for similar cases differed slightly. For our evaluation, we used the ICD-O codes of the BCR, as this registry was our primary data source. For a small number of patients ( n = 45/4615), the respective cancer entity was not defined clearly in the BCR, as either the ICD or ICD-O code was missing; these cases were excluded from our analysis.
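A minimal sketch of how this cross-registry matching could be implemented is given below; the pandas-based approach and all column names are illustrative assumptions, not the registries' actual data model.

import pandas as pd

# Hypothetical pseudonymized case tables; column names are assumptions.
match_keys = ["age", "sex", "year_of_diagnosis", "diagnostic_group"]

def count_matched_cases(bcr: pd.DataFrame, gccr: pd.DataFrame) -> int:
    # Restrict GCCR cases to Bavarian residents, then match on the
    # attribute combination described above.
    bavarian_gccr = gccr.loc[gccr["residence"] == "Bavaria", match_keys]
    matched = bcr.merge(bavarian_gccr.drop_duplicates(), on=match_keys, how="inner")
    return len(matched)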
We calculated crude incidence rates, determined as the annual number of cases per person-years, calculated using the average population count between 2002 and 2014. Bavarian childhood population estimates were obtained from the Bavarian State Office for Statistics.
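A worked illustration of this calculation is given below; the average population of about 2.22 million children aged 0–17 years is a back-calculation from the rates reported in the Results, not an official figure.

# Crude annual incidence per 1,000,000 person-years.
def crude_incidence_per_million(cases: int, years: int, avg_population: float) -> float:
    person_years = years * avg_population
    return cases / person_years * 1_000_000

AVG_POPULATION_0_17 = 2.22e6  # assumed average count, back-calculated from the reported rates
print(crude_incidence_per_million(4615, 13, AVG_POPULATION_0_17))  # ~160 per million (all malignancies)
print(crude_incidence_per_million(290, 13, AVG_POPULATION_0_17))   # ~10 per million (VRTs)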
Between 2002 and 2014, the BCR recorded 4615 children diagnosed with malignancy at ages 0–17 years, with a median age at diagnosis of 9 years. This corresponds to an average annual crude incidence rate of 160 per 1,000,000 children of this age group. Crude incidence rates by diagnostic group are presented in Table . We identified 990 patients (21.5% of all malignancies) with a cancer type that is estimated to have an incidence rate lower than 2 per 1 million or classifies as an orphan disease without consideration in entity-specific studies. Out of these, 290 cases (6.3% of all malignancies) could not be enrolled within an entity-specific study or registry of the GPOH, as no such study or registry was available, and thus are considered rare in the sense of the EXPeRT definition. This corresponds to a crude annual incidence rate of all pediatric VRTs in Bavaria of 10.1 per million. The diagnostic groups and specific cancer types of these cases are shown in Table . The most common tumor types among VRTs were malignant melanoma ( n = 134, 46.2%) followed by the group of other malignant epithelial neoplasms ( n = 94, 32.4%). The frequency of patients with VRTs was relatively stable over the observed time period of 13 years, as displayed in Fig. . Among VRTs registered in the BCR, 34 patients (11.7%) were younger than 10 years and 256 (88.3%) were 10 years or older. Accordingly, the median age at the time of diagnosis of a pediatric VRT was 15 years. The age distribution was similar among other rare pediatric tumors with an annual incidence rate of < 2 per million which were included in entity-specific studies or registries (< 10 years: 32.9%; ≥ 10 years: 67.1%). In contrast, the more frequent entities, characterized by an annual incidence rate of > 2 per million, were predominantly diagnosed in children < 10 years (59.9%). Malignant melanoma was the most common tumor type in both age groups, followed by skin carcinomas. In children ≥ 10 years, the third most frequent subgroup was gonadal carcinomas. The male to female ratio was 1:1.17. Malignant melanoma was the most common entity in both sexes, followed by various carcinomas and gonadal tumors. At diagnosis, advanced disease stages of VRTs were rarely detected. In most reported cases, there were no lymph node metastases (85.2%) or distant metastases (94.9%) at diagnosis. However, data on TNM staging were missing in about 40% of cases in the recordings of the BCR. In the same time period and with similar inclusion criteria, the number of registered cases of VRTs in the GCCR was considerably lower than that in the BCR. The GCCR reported 49 cases of VRTs in Bavaria, which corresponds to 16.9% of the patients with VRTs recorded in the BCR. While the GCCR did not record patients > 15 years before 2009, the recorded proportion remains the same for the time span 2009–2014 (15.5%), when patients with cancer diagnoses at age 15–17 years were included in the registry. The GCCR reported 22 cases of VRTs in Bavaria between 2009 and 2014, whereas the BCR recorded 142 cases of VRTs during the same period. Further details on this comparison are presented in Table . When comparing proportions of cases reported to the GCCR and BCR, 44.1% of VRT cases in the BCR in the age group < 10 years were recorded with the GCCR, while only 13.2% of VRT cases in the BCR in the age group ≥ 10 years were recorded with the GCCR.
The discrepancy between the respective recording frequencies may at least in part be explained by the coverage of more cases belonging to ICCC-3 class XI "Other malignant epithelial neoplasms and malignant melanomas" in the BCR ( n = 125) than in the GCCR ( n = 14). Furthermore, we found slightly higher counts for category VIII ("Malignant bone tumors") and category X ("Germ cell tumors, trophoblastic tumors, and neoplasms of gonads") in Bavaria. When comparing numbers by diagnosis, we found that malignant melanoma and other malignant skin cancers were the most common missing cases in the GCCR.
We identified 290 cases of rare childhood cancers according to the EXPeRT definition in Bavaria with diagnosis between 2002 and 2014, corresponding to 6.3% of all registered malignancies in the BCR and a crude annual incidence rate of all VRTs of 10.1 per million. This compares to an analysis of the American SEER registry, which found a crude annual incidence rate of all VRTs of 21.3 per million and estimated that 14% of cancer cases under 20 years belonged to rare entities. This difference can likely be attributed to the inclusion of older age groups in the SEER cohort, as the most common pediatric VRTs occur at older ages . Furthermore, the SEER analysis used a different definition of VRTs than our analysis, as it includes tumors that are classified as very rare only in the age group 0–14 years but occur with a higher frequency in the age group 0–19 years. In addition, some VRTs are recorded in entity-specific studies in Germany despite their rarity and were therefore not counted as VRTs in our analysis, an exclusion that was not applied in the SEER analysis. Thus, tumors like hepatoblastoma, hepatic carcinoma, thyroid carcinoma, and Ewing sarcoma were included in the SEER analysis. Therefore, these entities contributed substantially to the higher incidence rate of VRTs in the SEER analysis compared to our analysis. However, the incidence of VRTs registered at the BCR might still rise to a certain extent during the next years, even if the registration completeness has already reached ≥ 90%. Nevertheless, reporting cancer diagnoses to the BCR has become mandatory by law only in 2017, and as a result registration may still increase . However, a limitation of our analysis has to be considered, as VRTs in the BCR had specific ICD-O morphology codes, but possible changes in diagnosis at a later time after initial registration may not always have been recorded in the BCR. The numbers of detected cases of VRTs in the BCR were compared with cases registered in the GCCR in the same time period. When comparing the registration rates for VRTs not reported to clinical studies or registries, there was a significant registration gap between the BCR and the GCCR, which was most evident among adolescent VRT patients. In fact, only 49 of the 290 cases registered in the BCR were also reported to the GCCR. This significant registration gap is most likely due to differences in the registration structure between the GCCR and the BCR. In the BCR, all cancer cases in Bavaria were reported by treating physicians as well as pathologists, irrespective of age and treating department. In contrast, the GCCR only received notifications from specialized pediatric oncology units and not from hospitals that specialized in cancer care for adult patients. These notifications were limited to patients under the age of 15 years until 2008. Afterwards, the GCCR also recorded reports of cancer patients up to the age of 18 years . However, the inclusion of patients aged 15–17 years in the GCCR did not increase the reporting of VRTs considerably. Accordingly, these patients with VRTs do not seem to have been treated in the pediatric hematology-oncology units that routinely report to the GCCR. In accordance with previous studies, our analysis showed that VRTs are more common in adolescents, as 88% occurred in patients aged 10–17 years . As many of these are epithelial neoplasms or gonadal tumors that occur more frequently in adulthood, a significant proportion of the adolescent patients with VRTs may be treated in adult oncological therapy units (e.g.
dermatooncology, gynecooncology, ENT oncology, medical oncology) and may thus not be reported to the GCCR. In our comparison of BCR and GCCR data, malignant melanoma and skin carcinoma account for nearly 60% of VRTs and are substantially underreported in the GCCR. Diagnosis in early stages of disease without metastases may have favored the omission of interdisciplinary care, including pediatric oncology. When comparing the registration numbers of pediatric malignant melanoma in different registries, the phenomenon of underregistration becomes even more evident. An earlier analysis of the GCCR revealed 55 cases of malignant melanoma at an age < 18 years in Germany over 10 years . Another publication analyzed the German Central Malignant Melanoma Registry (CMMR), which receives data from cooperating dermatology departments and dermatologic practitioners throughout Germany. The CMMR registered 443 pediatric patients ≤ 18 years of age over a time period of nearly 30 years . The comparison of these numbers indicates that an entity such as malignant melanoma, which is common in adults but rare in children, is often treated in adult centers. However, as we found 134 pediatric cases of malignant melanoma registered in 13 years in Bavaria alone, a relevant underregistration has to be postulated not only for the GCCR but also for the CMMR, both being dependent on voluntary registration. Thus, the incidence of childhood malignant melanoma in Germany is likely to be considerably higher than previously described. Similarly, the incidence of other rare pediatric tumors, particularly in adolescence, may be underestimated, as current reports on the occurrence of cancer in childhood in Germany are based on data from the GCCR . Obviously, older children with rare "adult-type" tumors may benefit from the experience of an adult specialist . For example, pediatric patients with malignant melanoma who were treated by adult dermatologists had a comparable outcome to adult patients . Besides, pediatric patients with adult cancer types may benefit from early clinical trials in adults, being selected for compassionate use of similar strategies and medications. Nevertheless, regular recording of childhood malignant melanoma in a distinct registry could facilitate the identification of characteristics specific to this age group that could influence age-specific treatment guidelines. Furthermore, several entities, such as colorectal carcinoma, are known to show differences in tumor biology and behavior depending on whether they occur in children or adults . Moreover, the occurrence of epithelial malignancies in childhood is more frequently associated with tumor predisposition syndromes. In addition, long-term treatment-associated morbidity has a much stronger impact on children compared to elderly patients . Therefore, standards of diagnosis and treatment of certain malignancies should not simply be transferred from adults to children . It is important to develop close collaboration between pediatric and adult specialists on certain cancer entities to ensure the best treatment possible for children suffering from rare cancers. The underregistration of patients in the German Childhood Cancer Registry illustrates the need to improve registration structures for children and adolescents in Germany and to strengthen data exchange between adult and pediatric clinical cancer registries as well as epidemiological registries.
Based on comprehensive registration, clinical and research networks can be intensified, which will allow patients access to the best possible treatment and clinical research, despite the extreme rarity of their disease.
We conclude that forces need to be united to enable better registration of rare pediatric tumor cases. This can be achieved by improved cooperation between distinct pediatric and adult oncological departments within the established comprehensive cancer centers. Thus, registration of all cases of rare cancers in childhood and adolescence shall be ensured independently of the treating department. While German legislation addressed the issue of regular data exchange between clinical and epidemiological cancer registries in 2021, the new law only requires the development of a joint concept for cooperation between state cancer registries and the GCCR . Furthermore, nationwide mandatory reporting of all childhood cancers should be established to optimize recording and close registration gaps. This will be the prerequisite for a better understanding of rare entities, further research, the establishment of clinical trials, and the development of evidence-based pediatric guidelines for the diagnosis and treatment of these tumors. Realizing the importance of improved care for rare cancers in children, the German Registry for Rare Pediatric Tumors (STEP) and the EXPeRT group developed an interdisciplinary network of childhood and adult cancer experts on a national and international level. This network will ensure the optimal treatment of rare pediatric tumors according to the latest clinical experiences and research findings.
Maoto, a traditional herbal medicine, for post-exposure prophylaxis for Japanese healthcare workers exposed to COVID-19: A single center study
Introduction The COVID-19 pandemic caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is having a great impact on health systems worldwide. Because healthcare workers (HCWs) in hospitals are at extreme risk of exposure to SARS-CoV-2, the management of exposure events to limit nosocomial infections is of great concern . Long-term pre-exposure prophylaxis (PrEP) for COVID-19 through vaccination has been shown to be effective ; however, it is possible that vaccines for COVID-19 will become less effective against SARS-CoV-2 variants that can escape natural immunity or the immunity provided by these vaccines . Various methods of post-exposure prophylaxis (PEP), administered soon after exposure to COVID-19, have been tried, with numerous studies reporting PEP or PrEP for COVID-19 . However, at present, no chemoprophylaxis regimen for COVID-19 is available. Although some research has reported some effectiveness for hydroxychloroquine , recent WHO guidelines do not recommend its use as prophylaxis . Ivermectin is widely used for the treatment of COVID-19, especially in India and South America, but there is little evidence of benefit . Vitamins C and D, povidone iodine gargle, iota-carrageenan spray, and monoclonal antibodies against SARS-CoV-2 are candidates for COVID-19 prophylaxis, but there is little evidence so far to support them . Traditional herbal medicines have long played important roles in the Far East, especially in Japan, China, and Korea. Traditional herbal medicines, called Kampo, are accepted by the national medical insurance system of Japan, and these medicines are widely used by Japanese physicians . We previously reported two clinical trials showing that maoto (ma-huang-tang in Chinese) was effective in treating seasonal influenza in comparison with neuraminidase inhibitors . The suggested mechanism was that influenza virus particles remained in endosomes because of a failure in viral fusion with the endosome membrane due to elevated endosomal pH . The anti-viral effect of maoto was confirmed in an experimental mouse infection model of influenza . Other than influenza, we recently reported a case of COVID-19 treated with maoto in which high fever and viral load were reduced . Some Kampo medicine candidates for COVID-19 have recently been reported, but no large-scale clinical trials have been done . We herein report our retrospective investigation of the efficacy of maoto as PEP for HCWs in a hospital that experienced an outbreak of nosocomial infection with SARS-CoV-2.
Methods 2.1 Study design This was a cohort study done in a private Japanese hospital that experienced a nosocomial COVID-19 cluster in 2021. The primary endpoint was to evaluate the efficacy and tolerability of maoto as PEP for at-risk HCWs. The hospital has three wards with 175 beds. Most of the inpatients are elderly and are undergoing long-term care. Standard precautions were taken in this hospital. When febrile inpatients were observed in ward 1 of this hospital in early April, on 11 April all inpatients in this ward were tested for COVID-19 by real-time polymerase chain reaction (PCR), with 18 testing positive. The outbreak then spread to ward 2, which is located next to ward 1 and shares the same dining room. During the outbreak period from 11 to 30 April, 44 inpatients and 26 HCWs were positive by PCR for SARS-CoV-2. Our infection control team (ICT) implemented procedures for strict infection zoning and began requiring personal protective equipment (PPE) for the 55 HCWs in and around wards 1 and 2 on 14 April . At the beginning of this study on 17 April, there were 55 HCWs working in the COVID-19 zone in all, and maoto granules for medical use were prescribed to 42 of these 55 HCWs during 17–19 April for three days by the infection control doctor. The HCWs who rejected PEP (N = 13) were assigned to a control group. None of the subjects had been infected with COVID-19 or been vaccinated. None had severe underlying diseases, and none were examined by blood tests or X-rays on the day of prescription. The observation period was from 17 to 24 April, during which all participants underwent PCR testing once or twice a week or when presenting symptoms of COVID-19. The duration of the observation period was based on the three-day duration of the maoto prescription. PCR samples were collected by nasopharyngeal swab for examination by the Fukuoka Public Health Center. The day of diagnosis was the day of sampling. The result of PCR was generally reported on the day after sampling. After the observation period, the authors confirmed whether or not the subjects had completed the maoto regimen, presented adverse reactions, or been infected with COVID-19. If a subject was diagnosed with COVID-19, the authors questioned them about fever and other symptoms and where they recuperated during the acute stage of COVID-19: at home, at an assigned hotel, or in a hospital. The study was approved by the Institutional Review Board of Meotoiwa Hospital (#2021–001). 2.2 PEP Maoto was selected for PEP because 1) it is a clinically proven drug for the common cold and influenza, which share many symptoms with COVID-19, 2) it is cost effective, and 3) a case report described the efficacy of maoto for COVID-19 . Maoto is a multicomponent formulation extracted from four plants: Ephedrae Herba, Cinnamomi Cortex, Armeniacae Semen, and Glycyrrhizae Radix . Maoto granules in commercial medical dosage form (TJ-27) were purchased from Tsumura, Tokyo. Maoto was prescribed without insurance and administered orally at 2.5 g, three times a day, for three days, for a total of 22.5 g. No other PEP was administered. 2.3 Statistical analysis Statistical analysis of PEP efficacy and background factors of the participants was by Fisher's exact test, except for the mean age between the groups, which was compared by Student's t-test. P values less than 0.05 were considered significant. Data were analyzed with GraphPad Prism software (San Diego, California, US). The efficacy of prophylaxis was calculated as follows.
Prophylactic effectiveness (%) = (ARU − ARP) / ARU × 100, where ARU is the attack rate without prophylaxis and ARP is the attack rate with prophylaxis.
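A worked illustration of this formula and of the Fisher's exact test, using the infection counts reported in the Results below (maoto: 3/42; control: 6/13), might look as follows; scipy is used here only for illustration, as the original analysis was done with GraphPad Prism.

from scipy.stats import fisher_exact

arp = 3 / 42  # attack rate with prophylaxis (~7.1%)
aru = 6 / 13  # attack rate without prophylaxis (~46.2%)
print((aru - arp) / aru * 100)  # prophylactic effectiveness, ~84.5%

# 2x2 table: rows = maoto/control, columns = infected/not infected
odds_ratio, p_value = fisher_exact([[3, 39], [6, 7]])
print(odds_ratio, p_value)  # p < 0.05, consistent with the reported significance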
Results 3.1 Study subjects Of the 55 HCWs in wards 1 and 2 (zoning area), 42 were administered maoto as PEP for three days (total 22.5 g), and 13 rejected it . Adherence to the maoto regimen (22.5 g) was complete for 39 participants, two took 15 g, and one took 7.5 g. The mean and median total dosages of maoto were 21.2 g and 22.5 g, respectively. Epigastralgia is a known adverse reaction to maoto, but no adverse reactions were reported for the participants in the present study. No significant differences were found between the test and control groups in terms of profession, sex, or mean age. All of the subjects wore PPE in the isolation wards. 3.2 Prophylactic effect of maoto During the observation period, laboratory-diagnosed COVID-19 was significantly less frequent in the maoto group (N = 3, 7.1%) than in the control group (N = 6, 46.2%) . Table shows the subjects who contracted COVID-19. All the HCWs with COVID-19 became positive from 19 to 21 April, within a few days after the prescription of maoto. Fever was seen in one person in the maoto group and in two in the control group. No symptoms were seen in one person in the control group. Other symptoms included rhinorrhea in the maoto group and rhinorrhea, sore throat, and impaired smell in the control group. No hospitalization or death occurred in either group. The effectiveness of maoto for prophylaxis in the present study was 84.5%.
Discussion The present study shows that maoto would be useful in outbreak situations for preventing the spread of COVID-19 among HCWs. Significantly fewer subjects were infected with SARS-CoV-2 in the maoto group than in the control group. Although the study was observational and of small size, it has some unique characteristics. The most unique point is that it was done over the course of a COVID-19 cluster in a single hospital, in which 71 HCWs and inpatients were infected with COVID-19 within 4 weeks. Next, all of the subjects were working in a designated COVID zone, where they had a high risk of exposure to the virus. This unusual situation provided a good opportunity to evaluate PEP. Because this hospital takes care of many frail elderly needing long-term care, the nursing staff had a higher risk of catching COVID-19 than physicians and rehabilitation therapists. All subjects were previously unvaccinated and had not been infected with COVID-19, which avoided the bias of immunity. Last, unlike subjects exposed through household infection, medical follow-up of HCWs in a single hospital is relatively easy, and it is easier to confirm adherence to the maoto regimen. A randomized controlled trial (RCT) would have provided the strongest evidence, but much time is needed to prepare and implement an RCT protocol, thus doing an RCT was not practical in this critical, time-sensitive situation. Although the results of the present study may not be conclusive, they are valuable given the circumstances. Because COVID-19 has become pandemic, many drugs have been tried for its prevention; however, no effective prophylaxis, except for the vaccine, is available . Vaccines have a neutralizing effect against virus epitopes but take a few weeks to generate neutralizing antibodies in vaccinated people and thus are not suitable for post-exposure use. The ideal PEP needs both a therapeutic effect and a prophylactic effect because the virus may already have infected the host when they take PEP. Adverse effects are also important, and maoto has been shown to have few. Many prophylactic and therapeutic drugs have been proposed for use against COVID-19 , such as hydroxychloroquine, ivermectin, and monoclonal antibodies (casirivimab and imdevimab) . Other traditional herbal medicines have become candidates for the treatment of COVID-19 . In Japan, clinical trials of traditional herbal medicines, managed by the Japan Society for Oriental Medicine, are in progress . We recently reported a COVID-19 case treated with maoto in which we showed that it relieved fever and reduced viral load . This case led us to the idea of using maoto as prophylaxis against COVID-19. Kampo has many drugs other than maoto for the treatment of acute febrile diseases, such as COVID-19. We think that clinically proven Kampo medicines can be repurposed for PEP, with clinical advantages such as low cost, tolerability, and already widespread use in Japan. We previously reported that maoto inhibited endosomal acidification, showing that influenza viruses could not enter the cytosol . Recently, chloroquine was also reported to inhibit endosomal acidification and to block SARS-CoV-2 infection of the cytosol . Although the anti-SARS-CoV-2 mechanism of maoto is not clear, it has the above-mentioned mechanisms in common with chloroquine. We recently showed, in yet-to-be-published data, that maoto components specifically interact with the G glycoprotein of respiratory syncytial virus (RSV), which blocks the attachment of RSV to the host receptor.
It is possible that maoto components may also interact with SARS-CoV-2 surface proteins and block its infectivity. Future basic research and larger clinical trials of maoto for the treatment and prevention of COVID-19 will be important. This study has some limitations. First, we could not prepare a protocol in advance for the use of maoto for COVID-19 prophylaxis because the cluster in this hospital happened suddenly. As we left the decision to use maoto up to the subjects, the number of control subjects was much smaller than that of the maoto group, and the subjects taking maoto may have had a stronger awareness of infection control against COVID-19 than the control subjects. The start of maoto was delayed to mid-April, although it would have been better to start when the cluster was first identified. Clinical studies of chemical prophylaxis for COVID-19 face a major problem because it is not possible to know where or when a cluster will happen, and thus a protocol cannot be prepared in advance. Second, the infection zoning started three days before the start of PEP. It is possible that there was some effect on the reduction of COVID-19 due to the zoning and use of PPE . We think the zoning would have taken time to become effective, probably in late April, because patients may have been in the early incubation period when zoning was enforced; we therefore think the intervention with maoto was the reason for the low number of infections seen. Chemical prophylaxis has the advantage of inhibiting COVID-19 in the incubation period.
Conclusion This is a cohort study of maoto given for three days as PEP for HCWs exposed to COVID-19 in the isolation wards of a hospital with a COVID-19 cluster. Significantly fewer HCWs became COVID-19-positive in the maoto group than in the control group. This suggests that the short-term administration of maoto is effective as PEP for healthcare professionals working with patients suffering from COVID-19. Although some vaccines have proven highly effective, PEP will continue to be important for protecting HCWs at high risk of infection and for non-immunized populations.
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
All authors meet the ICMJE authorship criteria. Study concept and design: AN, AS, SN. Acquisition of data: AN, KI, YK, SM, SI, MI, TI. Statistical analysis: AN, AS, SN. Drafting and finalization of manuscript: AN, SN. Critical revision of the manuscript: all authors. Study supervision: SN. All authors have read and approved the final manuscript.
None.
Structure and establishment of the German Cochlear Implant Registry (DCIR)
Cochlear implant (CI) treatment is a very successful but also complex and lifelong process for patients who suffer from profound hearing loss or deafness . The medical–scientific basis of CI care in Germany is highly standardized and defined in detail by an AWMF guideline ( Arbeitsgemeinschaft der Medizinisch-Wissenschaftlichen Fachgesellschaften = Working Group of the Medical–Scientific Societies, register no. 017-071; ). In an elaborate consensus process, the current third version of the guideline was adopted in 2020 under the leadership of the German Society of Otorhinolaryngology, Head and Neck Surgery e. V. (DGHNO-KHC), together with the other medical societies that are also important for CI care, the German Society of Phoniatrics and Pediatric Audiology (DGPP) and the German Audiological Society (DGA). The current guideline describes not only the medical and scientific standard of diagnostics, surgery, and postoperative care currently applicable in Germany, but also the necessary lifelong aftercare process. This document is thus a milestone in the quality control of CI care not only in Germany, but worldwide. For the first time, essential aspects of structural quality, process quality, and outcome quality for CI care have been defined here. Based on this guideline, a practical recommendation for the implementation of the guideline content was developed by the Executive Committee of the DGHNO-KHC and also jointly consented upon by the relevant medical societies (DGHNO-KHC, DGPP, DGA; CI white paper of the DGHNO-KHC; ). The quality control of a complex and lifelong process, such as CI therapy, represents a major challenge. The ultimate responsibility for the lifelong CI care process undoubtedly lies with the institution providing cochlear implants (CIVE). Usually, this is a main department for otorhinolaryngology, head and neck surgery . This is based on medical responsibility (e.g., medical indication, surgical implantation, coordination, and responsibility for the overall process). In addition, there are legal requirements ("Medical Devices Operator Ordinance"—MPBetreibV), according to which the CIVE is to be regarded as the "operator of the implant" .
The use of medical registries is an effective tool for quality control in this context and, at the same time, for collecting scientific data that can also form the basis for future guideline developments. This is especially true if these clinical data are collected not only on a multicenter basis, but on a nationwide basis. For a number of medical implants and diseases, medical registries have already been operated very successfully for many years; the trauma registry and the endoprosthesis registry should be mentioned here. Although CI care has been available in Germany since the end of the 1980s, there are still insufficient national data on the number of patients treated with CIs, complications, manufacturer-independent recording of implant safety, long-term stability of hearing improvement with CIs, and long-term effects on quality of life. The German Implant Registry Act (Implantateregistergesetz—IRegG) provides for mandatory documentation of CIs in the future. However, a concrete date for the practical implementation of this law for CIs cannot currently be predicted. In the development of both the CI guideline and the CI white paper, it became clear that the quality criteria developed in each case can only represent the currently known scientific status. This led to the conclusion that only the establishment of a national registry would allow scientifically relevant questions to be addressed as well as future quality parameters to be developed. In this respect, the development of a Germany-wide CI registry represents a consistent continuation of the future-oriented continuous development of the CI guideline and the CI white paper and thus of the further quality control of CI care. The basic content of the CI registry presented here was already developed by the DGHNO-KHC for the second version of the CI white paper in May 2021 . Therefore, on the initiative of the Executive Committee of the DGHNO-KHC, a Germany-wide CI registry (German Cochlear Implant Registry = DCIR) was to be established on the basis of the AWMF CI guideline and the CI white paper. For this purpose, the following goals were to be achieved: (1) development of the legal and contractual basis for the establishment and operation of a clinical registry under the scientific direction of the DGHNO-KHC; (2) definition of the registry content based on the current CI guideline and the CI white paper; (3) development of an evaluation standard (hospital-specific and national annual reports); (4) development of a DCIR logo; and (5) the start of data entry and practical operation of the DCIR.
Scientific basis of the DCIR The process of hearing rehabilitation with a CI in Germany is described in detail by the AWMF CI guideline and takes into account the structural quality, the process quality, and the outcome quality of the complete care process. This guideline was developed in consensus with the medical–scientific societies relevant for CI treatment: the DGHNO-KHC, the DGPP, and the DGA. This guideline thus represents a milestone in the standardization of CI treatment in Germany. On this basis, the consented practical implementation recommendations were developed under the leadership of the DGHNO-KHC and published as the CI white paper in 2021 . The CI white paper already described the main features of a Germany-wide CI registry, whose practical implementation as the DCIR is described in this paper. Decision-making process for the establishment of the DCIR In parallel to the development of the CI guideline and the CI white paper for structuring the CI care process, an independent certification process for quality assurance of CI care was subsequently introduced in Germany . As early as 2016, after intensive discussion, the Executive Committee of the DGHNO-KHC made the decision to participate in the further development of quality control of CI care in a scientifically oriented manner. For this purpose, the establishment of a national CI registry was also considered essential. After completing the necessary technical preparatory work (CI guideline and CI white paper), the decision of the Executive Committee of the DGHNO-KHC to cooperate with an external registry operator was finally made in November 2021. Various potential providers were considered, and a registry operator with great audiological expertise was sought. The DGHNO-KHC board decided to implement the DCIR in cooperation with INNOFORCE (Ruggell, Liechtenstein) as the registry operator. The implementation of the DCIR was carried out under the scientific direction of the DGHNO-KHC board. Organizational structure and legal relationships The Executive Committee of the DGHNO-KHC developed a service catalog on the basis of which the content, structure, and operation of the DCIR by the registry operator were determined. The criteria essentially included the technical implementation of a registry database including an application programming interface (API), an interface for the transfer of data from databases already existing at the hospitals, the development of a data protection concept, and the practical operation of the DCIR. The processing of the pseudonymized data for the preparation of an annual report for each participating hospital and the preparation of a national annual report for the DGHNO-KHC are also among the agreed tasks of the registry operator. For this purpose, the registry operator, as the responsible party under data protection law, concludes a participation agreement in the DCIR with each of the interested hospitals. The respective hospital receives the annual report on the data entered in each calendar year. Only pseudonymized data are transmitted to the DCIR. The tasks of the participating hospitals include informing the patients whose data are registered about the objectives and the data protection concept, as well as obtaining and documenting individual patient consent for data transfer to the DCIR (Fig. ). There is no direct contractual legal relationship between the operator of the DCIR and patients.
Likewise, there is no direct legal relationship between the participating hospitals and the Executive Committee of the DGHNO-KHC with regard to the DCIR. The scientific management of the DCIR lies with the Executive Committee of the DGHNO-KHC, as do the rights of use of the anonymized national data. Data protection concept The collection of clinical data from patient care for the DCIR requires consent from each patient, even when using pseudonymized data. This consent therefore had to be obtained by the participating hospital for each patient whose data were to be registered in the DCIR. For this purpose, a model consent form was developed and made available to the participating hospitals. Data transfer from a participating hospital to the DCIR should be performed exclusively on the basis of pseudonymized data. The identification of individual patients or their data after transfer to the DCIR is therefore not possible for the registry operator or for the Executive Committee of the DGHNO-KHC. Each participating hospital should receive the data it has entered into the registry as an anonymized annual report. The annual report is thus a benchmark with which to compare the respective hospital data (e.g., number of complications) with the national overall data of the DCIR. The DGHNO-KHC Executive Committee receives an anonymized national annual report of all data without allowing conclusions to be drawn about individual hospitals or individual patients (Fig. ). The data protection concept presented was reviewed both legally and by the data protection officers of the respective hospitals before the DCIR went into operation.
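The paper does not specify the concrete pseudonymization procedure; purely as an illustrative sketch, a keyed hash held only by the hospital could derive a stable pseudonym from a local patient identifier before transfer:

import hmac
import hashlib

# Illustrative only: not the DCIR's actual scheme. A secret key kept at the
# hospital yields one stable pseudonym per patient, while the registry
# operator cannot reverse the mapping.
def pseudonymize(patient_id: str, hospital_secret: bytes) -> str:
    return hmac.new(hospital_secret, patient_id.encode("utf-8"), hashlib.sha256).hexdigest()

print(pseudonymize("LOCAL-0042", b"hospital-only-secret"))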
For this purpose, an API has been provided by the registry operator since fall 2022. Use of the database of the registry operator The registry operator (INNOFORCE, Ruggell, Liechtenstein) has an ENT database (ENTstatistics; ). This system is used by many hospitals in Germany to document and statistically evaluate otologic, rhinologic, laryngologic, and tumor findings. In particular, ENTstatistics offers interfaces for the integration of peripheral endpoints. The system supports the documentation of therapy data required for the DCIR as well as the subsequent transfer to the DCIR. Content of the DCIR: data blocks The DCIR is primarily oriented toward the documentation of the implant or implantation. Thus, only patients who have actually received an implant will be included in the registry. The registry is purely prospective, so that implants and implantations could only be registered from the time the registry started operating (January 2022). The registry system is therefore based on the recording and documentation of parameters relevant for the assessment of implant function. These are divided into ten so-called data blocks, which are based on the current AWMF CI guideline . These include in detail: baseline data, preoperative audiometry, preoperative hearing history, implant, surgery, CI-related complications, CI use and rehabilitation progress, postoperative audiometry, hearing/language development (children), and quality of life. In addition, however, the data blocks also include the documentation of guideline-compliant CI care. This treatment process includes the preoperative phase, the operative phase, the basic therapy, the follow-up therapy, as well as the lifelong aftercare. The definition and content of the data blocks have already been integrated into the current version of the CI white paper of the DGHNO-KHC . An overview of the data blocks and their content can be found in Table and the complete list of all collected registry parameters in the attached supplement. Time of data collection The time for documentation of the individual data blocks is also based on the treatment process consented in the CI guideline and the CI white paper, which is divided into five phases: preoperative phase, operative phase, basic therapy, follow-up therapy, aftercare (Fig. ). Although there are numerous individual, hospital-specific treatment concepts that vary the temporal scope of the individual phases, there is nevertheless scientific consensus on the basic temporal allocation of these stages (Fig. ). The DCIR therefore envisions documenting at least one time point for data collection for each of the individual phases in order to map all phases of hearing rehabilitation with CI. Since individual phases, e.g., follow-up therapy, may have a different number of individual appointments depending on the patient, the number of data entries may vary significantly. In principle, the DCIR allows any number of data entry points to be documented for each supply phase. However, at least one entry must be made for each individual phase. As an orientation of time, for the care phases in adults, basic therapy can be assumed up to approx. 6 weeks postoperatively, for follow-up therapy up to approx. 1 year postoperatively, and for aftercare starting approx. 1 year postoperatively. However, individually deviating periods are possible. For children, the time periods are also significantly different. 
The main features of the data collection periods have already been described in the current version of the CI white paper of the DGHNO-KHC . The structure of the DCIR provides for minimum documentation for individual data blocks in each phase of the care process. By contrast, other data blocks (e.g., data block 6: complications) are only documented in the event of an incident. This approach facilitates a practicable way between documentation scope and feasible effort for the participating hospital. An overview of the mandatory and incident documentation can be found in Fig. , which also provides an indication of the time periods for each phase. Data evaluation and preparation of annual reports The registry operator creates an anonymized annual report for each participating hospital based on the data entered by the hospital (Fig. ). These data are presented in comparison to the overall national data, thus enabling a relative comparison (benchmarking) for the respective hospital. A hospital’s own data can therefore be viewed in comparison to the average data for Germany as a whole. An exemplary presentation of excerpts from a hospital-specific annual report is shown in Fig. . The registry operator additionally provides the DGHNO-KHC Executive Committee with an anonymized annual report on all data entered into the registry. Identification of individual patients or individual hospitals in relation to the data provided is not possible in the national annual report. Hospitals are only listed anonymously here, so that only anonymized hospital comparisons are possible. Only the registry operator is aware of the identity of the hospital in order to provide feedback to the facility in the event of serious anomalies. Development of a logo for the DCIR To enable recognizability of the data and publications collected on the basis of the DCIR, a registry logo was developed in cooperation between the Executive Committee of the DGHNO-KHC and the registry operator, which will be made available to all participating partners of the DCIR for internal and external communication (Fig. ).
Scientific basis of the DCIR

The process of hearing rehabilitation with a CI in Germany is described in detail by the AWMF CI guideline and takes into account the structural quality, the process quality, and the outcome quality of the complete care process. The guideline was developed in consensus by the medical–scientific societies relevant to CI treatment, the DGHNO-KHC, the DGPP, and the DGA, and thus represents a milestone in the standardization of CI treatment in Germany. On this basis, consensus-based practical implementation recommendations were developed under the leadership of the DGHNO-KHC and published as the CI white paper in 2021. The CI white paper already outlined the main features of a Germany-wide CI registry, whose practical implementation as the DCIR is described in this paper.
Decision-making process for the establishment of the DCIR

In parallel to the development of the CI guideline and the CI white paper for structuring the CI care process, an independent certification process for quality assurance of CI care was subsequently introduced in Germany. As early as 2016, after intensive discussion, the Executive Committee of the DGHNO-KHC decided to participate in the further development of quality control of CI care in a scientifically oriented manner. For this purpose, the establishment of a national CI registry was considered essential. After completion of the necessary preparatory work (CI guideline and CI white paper), the Executive Committee of the DGHNO-KHC decided in November 2021 to cooperate with an external registry operator. Various potential providers were considered, and an operator with strong audiological expertise was sought. The board of the DGHNO-KHC decided to implement the DCIR in cooperation with INNOFORCE (Ruggell, Liechtenstein) as the registry operator; the implementation was carried out under the scientific direction of the DGHNO-KHC board.
Organizational structure and legal relationships

The Executive Committee of the DGHNO-KHC developed a service catalog on the basis of which the content, structure, and operation of the DCIR by the registry operator were determined. The criteria essentially comprised the technical implementation of a registry database including an application programming interface (API), an interface for the transfer of data from databases already existing at the hospitals, the development of a data protection concept, and the practical operation of the DCIR. The processing of the pseudonymized data for an annual report for each participating hospital and the preparation of a national annual report for the DGHNO-KHC are also among the agreed tasks of the registry operator. For this purpose, the registry operator, as the responsible party under data protection law, concludes a DCIR participation agreement with each interested hospital. The respective hospital receives the annual report on the data entered in each calendar year. Only pseudonymized data are transmitted to the DCIR. The tasks of the participating hospitals include informing the patients whose data are registered about the objectives and the data protection concept, as well as obtaining and documenting individual patient consent for data transfer to the DCIR (Fig. ). There is no direct contractual legal relationship between the operator of the DCIR and patients. Likewise, there is no direct legal relationship between the participating hospitals and the Executive Committee of the DGHNO-KHC with regard to the DCIR. The scientific management of the DCIR lies with the Executive Committee of the DGHNO-KHC, as do the rights of use of the anonymized national data.
Data protection concept

The collection of clinical data from patient care for the DCIR requires consent from each patient, even when using pseudonymized data. This consent therefore had to be obtained by the participating hospital for each patient whose data were to be registered in the DCIR. For this purpose, a model consent form was developed and made available to the participating hospitals. Data transfer from a participating hospital to the DCIR is performed exclusively on the basis of pseudonymized data. The identification of individual patients or their data after transfer to the DCIR is therefore possible neither for the registry operator nor for the Executive Committee of the DGHNO-KHC. Each participating hospital receives the data it has entered into the registry as an anonymized annual report. The annual report is thus a benchmark with which to compare the respective hospital data (e.g., number of complications) with the national overall data of the DCIR. The DGHNO-KHC Executive Committee receives an anonymized national annual report of all data that does not allow conclusions to be drawn about individual hospitals or individual patients (Fig. ). The data protection concept was reviewed both legally and by the data protection officers of the respective hospitals before the DCIR went into operation.
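The pseudonymization step described above can be illustrated with a short sketch. The following Python example is purely illustrative and not part of the DCIR implementation; the keyed-hash approach, the field names, and the site-held secret are our own assumptions. The essential property it demonstrates is the one required by the data protection concept: without the hospital's local key, a pseudonym cannot be mapped back to a patient.

```python
import hmac
import hashlib

def pseudonymize(patient_id: str, site_secret: bytes) -> str:
    """Derive a stable pseudonym from a local patient ID.

    The secret key never leaves the hospital, so neither the registry
    operator nor the DGHNO-KHC can re-identify a patient, while the same
    patient always maps to the same pseudonym within one hospital.
    """
    digest = hmac.new(site_secret, patient_id.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()

# Hypothetical usage: the hospital keeps `site_secret` in its own key store.
site_secret = b"hospital-local-secret-key"  # assumption, for illustration only
print(pseudonymize("PID-000123", site_secret))  # stable hex pseudonym
```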
Technical implementation of data entry into the CI registry

During the conception phase of the DCIR, a very heterogeneous starting situation became apparent with regard to the databases and documentation systems used for quality control of CI care in the hospitals. A solution that could be implemented for all hospitals willing to participate therefore had to take into account the different initial situations on the one hand and ensure homogeneous data quality of the DCIR on the other. Consequently, various technical access options for data transfer were developed and offered to the hospitals individually for use. These included (1) Internet-based data entry, (2) the use of an already existing database, or (3) the establishment of a database of the registry operator (Fig. ).
Internet-based data entry

For hospitals that have not operated their own IT system for the documentation of CI-related data so far, or that provide care for only a few CI cases per year, direct Internet-based data entry into the DCIR had to be available. For this purpose, the registry operator developed an Internet-based registry access that enables online "manual" entry of registry data. This access allows a hospital to participate in the DCIR without further technical requirements, such as the establishment of a separate hospital database.
Transfer of data from an existing hospital database

A large number of hospitals participating in the DCIR already operate their own hospital databases or documentation systems for quality assurance of CI care. Most of these local databases can connect peripheral devices (e.g., audiometers) and import their results. To avoid duplicate data collection, it was necessary to make it possible to transfer the locally collected CI therapy data to the registry. For this purpose, an API has been provided by the registry operator since fall 2022.
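The registry API itself is not specified in this paper, so the following sketch merely illustrates what an automated transfer of one pseudonymized record over a JSON/HTTPS interface could look like. The endpoint URL, field names, and token-based authentication are hypothetical and chosen only to make the idea of "local database pushes to registry" concrete.

```python
import requests

# Hypothetical endpoint and credentials; the real DCIR API is not described here.
REGISTRY_URL = "https://registry.example.org/api/v1/records"
API_TOKEN = "site-specific-token"

record = {
    "pseudonym": "a3f1...",                    # pseudonymized patient ID
    "data_block": "postoperative_audiometry",  # one of the ten data blocks
    "care_phase": "follow_up_therapy",         # one of the five care phases
    "entry_date": "2022-11-04",
    "payload": {"word_recognition_percent": 70},  # hypothetical field
}

response = requests.post(
    REGISTRY_URL,
    json=record,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    timeout=30,
)
response.raise_for_status()  # fail loudly if the transfer was rejected
```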
Use of the database of the registry operator

The registry operator (INNOFORCE, Ruggell, Liechtenstein) offers an ENT database (ENTstatistics). This system is used by many hospitals in Germany to document and statistically evaluate otologic, rhinologic, laryngologic, and tumor findings. In particular, ENTstatistics offers interfaces for the integration of peripheral devices. The system supports the documentation of the therapy data required for the DCIR as well as the subsequent transfer to the DCIR.
Content of the DCIR: data blocks

The DCIR is primarily oriented toward the documentation of the implant and the implantation. Thus, only patients who have actually received an implant are included in the registry. The registry is purely prospective, so implants and implantations could only be registered from the time the registry started operating (January 2022). The registry system is therefore based on the recording and documentation of parameters relevant for the assessment of implant function. These are divided into ten so-called data blocks, which are based on the current AWMF CI guideline: baseline data, preoperative audiometry, preoperative hearing history, implant, surgery, CI-related complications, CI use and rehabilitation progress, postoperative audiometry, hearing/language development (children), and quality of life. In addition, the data blocks also cover the documentation of guideline-compliant CI care. This treatment process comprises the preoperative phase, the operative phase, the basic therapy, the follow-up therapy, and the lifelong aftercare. The definition and content of the data blocks have already been integrated into the current version of the CI white paper of the DGHNO-KHC. An overview of the data blocks and their content can be found in Table and the complete list of all collected registry parameters in the attached supplement.
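The ten data blocks named above translate naturally into an enumeration. The following minimal Python sketch simply encodes the list from the guideline; the identifier names are our own, but the blocks and their numbering (e.g., block 6 = complications, as referenced later in the text) come from the source.

```python
from enum import Enum

class DataBlock(Enum):
    """The ten DCIR data blocks derived from the AWMF CI guideline."""
    BASELINE_DATA = 1
    PREOPERATIVE_AUDIOMETRY = 2
    PREOPERATIVE_HEARING_HISTORY = 3
    IMPLANT = 4
    SURGERY = 5
    CI_RELATED_COMPLICATIONS = 6   # documented only in the event of an incident
    CI_USE_AND_REHABILITATION_PROGRESS = 7
    POSTOPERATIVE_AUDIOMETRY = 8
    HEARING_LANGUAGE_DEVELOPMENT_CHILDREN = 9
    QUALITY_OF_LIFE = 10
```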
Time of data collection

The timing of documentation of the individual data blocks is also based on the treatment process agreed upon in the CI guideline and the CI white paper, which is divided into five phases: preoperative phase, operative phase, basic therapy, follow-up therapy, and aftercare (Fig. ). Although there are numerous individual, hospital-specific treatment concepts that vary the temporal scope of the individual phases, there is nevertheless scientific consensus on the basic temporal allocation of these stages (Fig. ). The DCIR therefore envisions documenting at least one data collection time point for each of the individual phases in order to map all phases of hearing rehabilitation with a CI. Since individual phases, e.g., follow-up therapy, may comprise a different number of appointments depending on the patient, the number of data entries may vary considerably. In principle, the DCIR allows any number of data entry points to be documented for each care phase; however, at least one entry must be made for each individual phase. As a temporal orientation for the care phases in adults, basic therapy can be assumed to last up to approx. 6 weeks postoperatively, follow-up therapy up to approx. 1 year postoperatively, and aftercare to start approx. 1 year postoperatively. Individually deviating periods are possible, and for children the time periods differ significantly. The main features of the data collection periods have already been described in the current version of the CI white paper of the DGHNO-KHC. The structure of the DCIR provides for minimum documentation for individual data blocks in each phase of the care process. By contrast, other data blocks (e.g., data block 6: complications) are only documented in the event of an incident. This approach strikes a practicable balance between documentation scope and feasible effort for the participating hospital. An overview of the mandatory and incident-triggered documentation can be found in Fig. , which also indicates the time periods for each phase.
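The rule that every care phase must contain at least one entry, while allowing arbitrarily many, lends itself to a simple completeness check. The sketch below is illustrative only and assumes that each registry entry is tagged with one of the five phases; the phase identifiers are our own shorthand for the phases named above.

```python
CARE_PHASES = [
    "preoperative",
    "operative",
    "basic_therapy",
    "follow_up_therapy",
    "aftercare",
]

def missing_phases(entries: list[dict]) -> list[str]:
    """Return the care phases for which a patient record has no entry yet."""
    documented = {entry["care_phase"] for entry in entries}
    return [phase for phase in CARE_PHASES if phase not in documented]

# Example: a record with entries for only the first three phases.
entries = [
    {"care_phase": "preoperative", "entry_date": "2022-01-10"},
    {"care_phase": "operative", "entry_date": "2022-02-01"},
    {"care_phase": "basic_therapy", "entry_date": "2022-03-15"},
]
print(missing_phases(entries))  # ['follow_up_therapy', 'aftercare']
```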
Data evaluation and preparation of annual reports

The registry operator creates an anonymized annual report for each participating hospital based on the data entered by that hospital (Fig. ). These data are presented in comparison to the overall national data, thus enabling a relative comparison (benchmarking) for the respective hospital: a hospital's own data can be viewed against the average data for Germany as a whole. An exemplary presentation of excerpts from a hospital-specific annual report is shown in Fig. . The registry operator additionally provides the DGHNO-KHC Executive Committee with an anonymized national annual report on all data entered into the registry. Identification of individual patients or individual hospitals is not possible in the national annual report; hospitals are listed anonymously, so that only anonymized hospital comparisons are possible. Only the registry operator knows the identity of each hospital, in order to provide feedback to the facility in the event of serious anomalies.
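At its core, the benchmarking in the annual report reduces to comparing a site-level rate with the pooled national rate. The following sketch illustrates this for a single metric; all counts are hypothetical and chosen only for illustration.

```python
def complication_rate(n_complications: int, n_implants: int) -> float:
    """Simple rate used for benchmarking a hospital against national data."""
    return n_complications / n_implants

# Hypothetical counts, for illustration only.
hospital = complication_rate(n_complications=3, n_implants=60)
national = complication_rate(n_complications=110, n_implants=2500)

print(f"hospital: {hospital:.1%}, national: {national:.1%}")
# hospital: 5.0%, national: 4.4%
```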
Development of a logo for the DCIR

To make the data and publications collected on the basis of the DCIR recognizable, a registry logo was developed in cooperation between the Executive Committee of the DGHNO-KHC and the registry operator; it will be made available to all participating partners of the DCIR for internal and external communication (Fig. ).
Set-up and operation of the DCIR

Practical operation of the DCIR with browser-based entry of pseudonymized data started in January 2022. In the first 15 months of registry operation, more than 2500 CIs from more than 2000 patients were successfully entered into the DCIR. Note that the number of implants does not correspond to the number of patients, as patients may also have bilateral CIs. Detailed data analysis is currently in progress, so a content-related presentation of the collected results will follow in a separate scientific evaluation (manuscript in preparation).
Automated data export

The three different mechanisms for data acquisition (Fig. ) have since been implemented; the API has been available since fall 2022. This made it possible to find individual solutions for the participating hospitals in order to ensure registry participation.
Participating hospitals

After completion of the service catalog and the subsequent start of the project, hospitals have been able to declare their willingness to participate in the registry since July 2021. In 2021, 11 hospitals signed a contract to participate in the DCIR, and in 2022, a further 64 hospitals. By March 2023, a total of 75 hospitals had contractually declared their participation in the DCIR.
Annual reports 2022

Data entry into the DCIR for the year 2022 was successfully completed by the participating hospitals. At the time of writing, the evaluation process was underway, with completion of the hospital-specific annual reports for 2022 and preparation of the national annual report for 2022 anticipated. A scientific evaluation and publication of the content of the national annual report is currently being prepared by the Executive Committee of the DGHNO-KHC.
Public availability of information on the DCIR

The technical implementation of the DCIR was accompanied by the public provision of information on the objectives and content of the registry. For this purpose, the operator of the registry set up a freely accessible website ( https://www.ci-register.de/ ).
The development, structuring, and operation of a national clinical registry is a complex task that requires considerable time and financial resources. All of the development steps for the successful operation of the DCIR were realized by the DGHNO-KHC and the registry operator on their own initiative. In addition to the definition of the registry content, the legal and contractual basis for establishment and operation as well as the development of annual reports and a logo were successfully elaborated. Productive operation of the DCIR started at the beginning of 2022; since then, more than 2500 implants from more than 2000 patients have been included in the DCIR in just over 1 year. It is clear that within a short period the DCIR will provide a very broad data foundation for answering scientific questions and deriving future quality standards. The knowledge gained from the DCIR will thus make a significant contribution to quality control and to the further development of scientifically based quality standards in CI care. In principle, participation in the DCIR is possible for all hospitals that agree to the contractual requirements with the registry operator and fulfill the technical requirements for data entry. In the short time since the DCIR went live, 75 hospitals have already contractually agreed to participate in the CI registry. This is an impressive result, which illustrates the great interest of the hospitals in participating in the registry. In return, participating hospitals receive an annual report that enables benchmarking of hospital data against national averages. This report can also be used directly as a quality report for the hospital. A participating hospital thus not only makes an important active contribution to the further development of new quality standards, but also receives an immediate benefit in the form of a standardized and professional quality report. A CI quality report is also required by the current CI guideline and the CI white paper in order to make a transparent indication of an institution's quality of care easily available to patients. Providing the technical requirements for data transfer from a hospital presented a particular challenge in the implementation of the DCIR. In order to achieve the highest possible participation in the DCIR, various technical solutions had to be developed, including the option of browser-based individual data entry. There are no additional costs to the hospitals for using the API to submit data from local databases; the API functionality is already included in the basic features of the registry. However, a hospital may incur expenses to implement (e.g., program) the export from its local database. Some hospitals took their wish to participate in the DCIR as an occasion to adapt their own database or to implement a new local database, e.g., ENTstatistics (INNOFORCE, Ruggell, Liechtenstein). The registry operator offered the hospitals individual solutions tailored to their specific needs. In this way, successful data exports from individual participating hospitals could already be carried out in the first year after establishment of the registry. Since both the financial resources and the investment possibilities diverge considerably between hospitals, this aspect was taken into account in the conceptual design of the registry with different access options.
This concept allowed hospitals to decide for themselves which level of automation of data transfer (browser-based input, transfer from an existing database, or a database to be newly set up) they would like to implement. The experience gained in the first year proved the basic usability of all three data entry options of the DCIR.
Assessment of the quality of care

A particular value of a clinical registry lies in the long-term assessment of the quality of care. This includes not only acute complications but also the long-term outcome quality of a therapeutic procedure. The scandal surrounding defective breast implants illustrates this aspect impressively. As a consequence, the IRegG has meanwhile created the legal basis in Germany for the mandatory documentation of a large number of medical implants in a register. The CI is explicitly mentioned in this law, so that in the future documentation of CI care will no longer be voluntary and purely scientifically oriented, but mandatory by law. At present, it remains unclear why other implantable hearing systems, such as active middle ear implants, were not included in the IRegG in addition to the CI. Although the exact starting date of mandatory CI documentation is currently not known, the DGHNO-KHC has already set the professional standard with the registry initiative presented here and has started a dialogue with the responsible registry authority (BfArM). Through the establishment and operation of the DCIR, subject-specific parameters were defined at an early stage, and their implementation has been in practice since the beginning of 2022. This initiative can therefore be considered exemplary for other medical implants or even other medical societies. In addition, a new EU regulation for medical devices came into force in 2021, which requires implant manufacturers to provide, among other things, clinical performance data on their implants in long-term follow-up. This regulation, known as the Medical Device Regulation (MDR), poses significant challenges for hospitals and implant manufacturers. It remains to be seen whether the DCIR can also make a contribution with regard to the MDR.
Acceptance

CI-provision facilities (CIVE) that demonstrate the necessary structural, process, and outcome quality on the basis of the CI guideline have been able to obtain certification in a structured process since 2021. Across Germany, 47 hospitals have already been awarded the CIVE certificate. A prerequisite for successful certification is the hospital's commitment to actively participate in the DCIR; without such a commitment, the CIVE certificate cannot be awarded. Comparing the number of hospitals committed to the DCIR (75) with the number of certified institutions reveals a considerable difference between certified CI-provision institutions and hospitals participating exclusively in the registry. This difference can be explained from different perspectives. On the one hand, hospitals that may not yet fulfill all conditions for successful certification could nevertheless have decided to participate in the registry already; it is possible that these hospitals will apply for CIVE certification at a later date. Another explanation could be that some hospitals do not seek certification but want to take advantage of registry participation. Since a structured annual report represents a benefit for these hospitals, the DCIR could also serve them as a pure database, as they likewise receive an annual report and, at the end of the year, the pseudonymized raw data they entered into the registry. In this respect, hospitals derive multiple benefits from participating in the registry, beginning in the first year of participation. A further explanation for the high number of participating hospitals could be the pricing of the registry operator: in the first months after the start of the registry, the offer was particularly attractive in financial terms. Since the start-up of the DCIR, 75 hospitals have contractually agreed to participate in the registry. This is a very positive development for such a short period, especially considering the number of hospitals that could potentially participate. The exact number of institutions offering CI care in Germany is not known. In a survey conducted in 2020 by the DGHNO-KHC in collaboration with the patient self-help group (Deutsche Cochlea-Implantat Gesellschaft, DCIG), 70 of the 170 ENT hospitals existing in Germany stated that they perform CI treatment. Since not all hospitals took part in this survey, the authors assume a number of approximately 100 hospitals offering CI care. Against this background, the number of hospitals (i.e., 75) that agreed to participate in the registry is even more impressive, as it obviously already corresponds to the majority of CI hospitals in Germany. This high number of participants not only demonstrates the great interest of hospitals in actively participating in the registry, but also indicates the representative coverage of the clinical data available in Germany that can be expected of the DCIR in the future.
Comparison with other registries

In Germany, a large number of clinical registries for the collection of care parameters have already been introduced very successfully in the past. One example is the Trauma Registry of the Academy of Trauma Surgery (AUC). This registry has been in operation since 1993 and can be regarded as pioneering for medical–scientific registries aimed primarily at improving the quality of care. The objective of the CI registry is similar to these established concepts: the scientific approach, supported by the medical society (DGHNO-KHC), is motivated not by commercial interests but by medical, scientific, and quality-oriented interests. With regard to the collected data, it was therefore beyond question that only anonymized data analyses would be performed. At the same time, in order to enable realistic benchmarking, it must be possible to compare the respective hospital data with the national average data. The evaluation system presented here combines the hospitals' interests (anonymous benchmarking) with the interest of the national medical society in a national annual report. With this approach, a very large number of care processes and treated patients is collected nationwide for the first time.
International comparison

Internationally, some CI registries already exist, for example the Swiss and the French CI registries. In Switzerland, the registry was introduced as early as 1992 and has been operated continuously since then. A recent paper showed that in 2021, CI registries were established in only four European countries (approx. 10% of the countries in Europe). In this respect, the introduction of the DCIR is not the first approach to quality control of CI care in a European country. However, considering the parameters collected in other European CI registries, the approach of the DCIR presented here, collecting relevant parameters across ten defined data blocks, appears unique and innovative. The main differences between the DCIR and other registries are the recording of the entire CI care process and the lifelong follow-up, which are important quality parameters.
Limitations

Despite the very positive development of the DCIR so far, the methodological approach is nevertheless subject to some limitations, which are discussed in the following. The main challenge in securing the long-term operation of the registry is to collect data from CI patients as completely as possible. Currently, participation in the registry is purely voluntary for hospitals, motivated by the desire to further improve the quality of CI care, the future development of new quality standards, and the provision of an annual report for participating hospitals; there is no obligation to provide the data. As a point of reference for assessing the coverage of available data by the registry, the number of surgeries performed can be used on the basis of the DRG statistics provided by the Federal Statistical Office. According to these, 4359 CI operations took place in Germany in 2021. Considering the number of implants included in the DCIR in about 15 months (as of March 2023: > 2500 implants), the penetration of the registry for CI care in Germany appears very high even at this point in time: assuming approximately the same case numbers in 2021 and 2022, approximately 50% of the implants have already been included in the registry during the short operating time of the DCIR. Even though this is a remarkable success in the initial phase of the registry, it must be critically noted that approx. 50% of the implantations are currently not yet documented. Whether this is due to unresolved individual technical difficulties, incomplete data transfer, or hospitals that are not willing to participate cannot be conclusively assessed at this time.
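The coverage estimate quoted above can be reproduced with simple arithmetic: scaling the 4359 operations of 2021 to the 15-month observation window (assuming a constant case rate, as the text does) yields roughly 5400 expected implantations, of which more than 2500 were registered.

```python
ops_per_year_2021 = 4359     # DRG statistics, Federal Statistical Office
months_observed = 15         # registry start (January 2022) to March 2023
implants_registered = 2500   # lower bound, "> 2500" in the DCIR

expected = ops_per_year_2021 * months_observed / 12
coverage = implants_registered / expected
print(f"expected: {expected:.0f}, coverage: {coverage:.0%}")
# expected: 5449, coverage: 46% (i.e., roughly half)
```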
At the latest with the future implementation of the IRegG and its application to the CI, complete documentation of all implantations is foreseeable, since it will then be obligatory. The DCIR is also carrying out important technical preliminary work in this area. Ensuring data quality will be essential for the scientific usability of the registry. This concerns both the completeness of the provided data sets and, in particular, the plausibility checking of the recorded data; quality control of the DCIR will therefore also be of great importance in the future. At present, data transfer to the registry requires informed consent from the respective patient. This is a time-consuming process, and there is also the possibility that a patient does not consent to the data transfer. Currently, this administrative burden lies with the participating hospitals. It is to be hoped that CI patients will continue to give their consent, also in view of the expected results and the long-term influence on future quality standards, in order to support the DCIR. An appropriate presentation of the DCIR and the resulting data should therefore also be made available to CI patients in order to achieve support from both the patients and the patient self-help organizations. With the future introduction of the IRegG, documentation of the collected data will become legally mandatory and will no longer require a patient's active consent. The structure of the CI registry is based on the content of the German CI guideline and the CI white paper. In principle, the care process presented here applies to adults and children alike. However, it is obvious that in addition to a high degree of congruence of the parameters to be collected (e.g., technical or demographic data), age-specific parameters must also be gathered for a large number of the collected variables. Especially with regard to the assessment of success, a multitude of challenges arise here that make direct comparability of child and adult data difficult. Currently, the registry structure is consistent with the consensus content of the CI guideline and CI white paper. It remains to be seen whether changes to data blocks or data fields will be required with future updates of the CI guideline or CI white paper.
Costs and refinancing

The further development of the quality of CI care expected from the registry is not only in the interest of patients and hospitals, but also in the direct interest of healthcare payers, e.g., health insurers. Recurring costs for the operation of the registry are also to be expected in the future, both for the participating hospitals and for the DGHNO-KHC. It is therefore obvious that the described initiative must find the support of the healthcare payers in order to ensure the long-term operation of the DCIR. This is of particular importance, since only the long-term follow-up of implant safety, of possible complications, and of the outcome quality of CI care offers the full potential for scientifically based quality control. The development of the basic principles, the structuring, and the introduction of the DCIR up to its successful start-up were realized exclusively on the initiative of the DGHNO-KHC and the participating hospitals. The financial investments required for this represent a considerable burden for the DGHNO-KHC, and hospitals also bear annual costs for participation in the registry. The DCIR therefore represents a relevant financial investment both for the national medical society and for the hospitals. There is no question that the refinancing of the registry and its long-term operation require the financial support of the healthcare payers.
The work presented here describes the structuring, development, and successful establishment of the German Cochlear Implant Registry (DCIR). By implementing the preliminary work of the national CI guideline and the CI white paper regarding the parameters relevant to structure, process, and outcome quality, a consistent transfer of this content to the DCIR was achieved. After the introduction of certification for CI-provision institutions, the introduction of the DCIR represents another essential milestone for the future science-based quality control of CI care in Germany. The initiative of the German Society of Otorhinolaryngology, Head and Neck Surgery (DGHNO-KHC) described here, supported by the participating hospitals, combines active quality assurance in the interest of patients with scientific work. The registry can therefore be considered exemplary for other areas of medical care and thus also sets internationally visible standards.
Table. Data blocks of the DCIR
A Flat Reconstruction of the Medial Collateral Ligament and Anteromedial Structures Restores Native Knee Kinematics: A Biomechanical Robotic Investigation

Eight unpaired fresh-frozen cadaveric knee specimens (mean age, 70.1 ± 9.5 years; 5 male, 3 female) without previous knee injury, surgery, or high-grade osteoarthritis were obtained from MedCure. The study was performed with permission from the institutional review board of the University of Münster (reference No. 2020-181-f-S). Specimens were stored at –20°C and thawed for 24 hours at room temperature before preparation. The skin and subcutaneous tissues were resected, leaving fascia and muscles intact. The sartorius fascia and hamstring tendons were resected from their tibial insertion, leaving the anteromedial retinaculum intact. The tibia and femur were cut 200 mm above and below the joint line and secured in aluminum cylinders with 3-component polyurethane resin bone cement (RenCast; Gößl & Pfaff). The fibula was then cut 100 mm distal to the proximal tibiofibular joint and transfixed to the tibia with a 3.5-mm cortical screw. Specimens were wrapped in tissue paper soaked with water to prevent drying.

Robotic Test Setup

A validated setup consisting of a 6 degrees of freedom industrial robot (KR 60-3; KUKA Robotics) equipped with a force-torque sensor (FTI Theta; ATI Industrial Automation) was used for biomechanical testing in this study. The robotic system allows displacement-controlled positioning with a repeatability of ±0.06 mm; the force-torque sensor allows a precision of ±0.25 N and ±0.05 N·m in force-controlled positioning. Using the custom software simVITRO (Cleveland Clinic BioRobotics Laboratory), the test system was optimized for the simulation and acquisition of knee joint movements. A tactile measuring arm (Absolute Arm 8320-7; Hexagon Metrology GmbH) with an accuracy of 0.05 mm was utilized to digitize landmarks on the distal femur, the tibial plateau, and the shafts of the femur and tibia, from which a modified Grood and Suntay coordinate system was defined. Data acquisition was performed at a sampling rate of 500 Hz. Each specimen was preconditioned by flexing and extending the knee 10 times. After neutralizing all forces and torques acting on the knee in full extension, the passive path was determined by flexing each knee from full extension to 90° of flexion while minimizing forces (<1 N) and torques (<0.1 N·m) on all axes other than the flexion-extension axis. An axial compression force of 50 N was applied to keep the femur and tibia in contact during the passive path. To determine the knee kinematics, a force-controlled testing protocol was performed, meaning that displacements in response to given forces and torques were recorded. At 0°, 30°, 60°, and 90° of flexion, the following test protocols were performed under an axial compression of 200 N (simulating partial weightbearing during rehabilitation): 8 N·m valgus angulation, 5 N·m internal tibial rotation torque, 5 N·m ER torque, 89 N anterior tibial translation (ATT) force, and 89 N ATT force under 5 N·m ER torque, simulating the AMRI test (Slocum test; presented in millimeters of ATT and referred to as anteromedial translation in the following text).
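The force-controlled loading protocol can be summarized compactly as a test matrix. The sketch below simply encodes the loads reported above; the variable names and data structure are our own and not part of the simVITRO software.

```python
# Force-controlled test matrix, applied at each flexion angle and always
# under 200 N of axial compression (simulated partial weightbearing).
FLEXION_ANGLES_DEG = [0, 30, 60, 90]
AXIAL_COMPRESSION_N = 200

LOAD_CASES = {
    "valgus_rotation":       {"torque_nm": 8.0},
    "internal_rotation":     {"torque_nm": 5.0},
    "external_rotation":     {"torque_nm": 5.0},
    "anterior_translation":  {"force_n": 89.0},
    # Simulated AMRI (Slocum) test: anterior drawer under ER torque.
    "anteromedial_translation": {"force_n": 89.0, "er_torque_nm": 5.0},
}

for angle in FLEXION_ANGLES_DEG:
    for name, load in LOAD_CASES.items():
        print(f"{angle:2d} deg flexion, {AXIAL_COMPRESSION_N} N axial: {name} -> {load}")
```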
Sequential Cutting and Reconstruction Protocol

After acquiring the native knee joint kinematics, the superficial medial collateral ligament (sMCL) was released from its tibial insertion and resected over its full length, while keeping the deep MCL (dMCL) and AMC intact. In the following step, the dMCL and overlying anteromedial retinaculum were resected. The flat reconstruction of the sMCL and anteromedial corner was performed according to a previously described technique, with slight modifications (use of cannulated chisels and creation of bone tunnels in 20° of flexion). First, the previously harvested semitendinosus tendon was partially incised longitudinally and flattened using a raspatorium to produce a flat graft, and its length was trimmed to 24 cm. The flattened tendon was doubled over the loop of an adjustable cortical button (FairFix; Medacta) so that one-third of the length was available for the anteromedial limb and two-thirds for the sMCL reconstruction. After sizing of the flat graft, a 2.0-mm K-wire was drilled through the center of the medial femoral epicondyle and overdrilled with a 4.5-mm cannulated drill. Next, a flat femoral bone tunnel was created to a depth of 20 mm using a cannulated chisel (Medacta) matched to the graft size. The femoral bone socket was oriented parallel to the joint line in 20° of flexion to best reproduce the angulation of the femoral attachment of the sMCL. The adjustable button was shuttled through the 4.5-mm tunnel, and the graft was pulled into the femoral tunnel to a depth of 10 mm by shortening the pulley system of the button. Tibial fixation was then performed using 4 all-suture anchors (MectaLock; Medacta). The anchors for fixation of the sMCL limb of the graft were placed at the anterior and posterior borders of the anatomic tibial sMCL insertion site. For the anteromedial limb, which aims to mimic the combined function of the dMCL and anteromedial retinaculum, the first anchor was placed 20 mm distal to the joint line and immediately in front of the sMCL limb, and the second anchor was placed 20 mm anterior to the first. The graft was sutured in a modified Krackow technique and fixed to the bone using the No. 2 suture of each anchor. Final tensioning was performed in 20° of flexion by tightening the femoral adjustable button. Finally, the finished reconstruction was sutured to the posteromedial capsule and anteromedial retinaculum with No. 2 sutures (PowerSuture; Medacta).

Statistical Analysis

Extraction of knee kinematics from the raw simVITRO data was performed using MATLAB (Version R2020a; MathWorks), and statistical analysis was performed using Prism (Version 10; GraphPad Software). The data were confirmed to be normally distributed using histograms and the Shapiro-Wilk test. Means of single groups are presented with standard deviations. Mixed linear models with Geisser-Greenhouse correction were used to assess the main effects and interactions of each independent variable (cutting state and flexion angle). The dependent variables were valgus rotation (in degrees), ER (in degrees), ATT (in millimeters), and anteromedial translation (in millimeters). Pairwise comparisons with Dunn correction were used to compare the contributions of the states at different flexion angles; comparisons were made only against the native state to avoid unnecessary multiple comparisons. A P value <.05 was considered to indicate significant differences. Differences between means are presented as mean differences with corresponding 95% confidence intervals. An a priori power analysis was performed using G*Power (Version 3.1).
Based on the means and standard deviations from a previous study on knee laxity, it was determined that a sample size of 8 knees would allow the identification of changes in translation/rotation of 2.0 ± 1.7 mm/deg (effect size, 1.2) with 80% power at a significance level of P < .05. All 8 knees were included in the final analysis, as no specimen had to be excluded after testing.
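For readers without G*Power, the same a priori calculation can be reproduced in Python. The sketch below assumes a two-sided paired/one-sample t-test design, which is consistent with the reported sample size:

```python
# Reproducing the a priori power analysis (performed in G*Power) with
# statsmodels, assuming a two-sided paired/one-sample t-test design.
from math import ceil
from statsmodels.stats.power import TTestPower

# A detectable change of 2.0 with SD 1.7 gives the reported effect size ~1.2.
n = TTestPower().solve_power(effect_size=1.2, alpha=0.05, power=0.80,
                             alternative="two-sided")
print(ceil(n))  # -> 8 knees, matching the sample size used in the study
```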
Group means for the different movements performed are presented in Appendix Table A1 (available in the online version of this article).

Valgus Rotation

Cutting of the sMCL led to a significant increase in valgus rotation at all tested flexion angles (P < .05). Subsequent cutting of the dMCL led to a further increase in valgus rotation at all flexion angles (P < .05). The flat reconstruction of the sMCL and anteromedial corner reduced valgus rotation to values not significantly different from those of the native knee.

External Tibial Rotation

Cutting of the sMCL led to a significant increase in ER at all tested flexion angles (P < .05). Subsequent cutting of the dMCL led to a further increase in ER at all flexion angles (P < .05). The flat reconstruction of the sMCL and anteromedial corner reduced ER to values not significantly different from those of the native knee.

Anterior Tibial Translation

Cutting of the sMCL led to a significant increase in ATT at 0°, 30°, and 90° of flexion (P < .05). Subsequent cutting of the dMCL led to a further significant increase in ATT at all flexion angles (P < .05). The flat reconstruction of the sMCL and anteromedial corner reduced ATT to values not significantly different from those of the native knee.

Anteromedial Tibial Translation (Slocum Test)

Cutting of the sMCL led to a significant increase in anteromedial tibial translation at all flexion angles (P < .05). Subsequent cutting of the dMCL led to a further significant increase in anteromedial tibial translation at all flexion angles (all P < .05). The flat reconstruction of the sMCL and anteromedial corner reduced anteromedial tibial translation to values not significantly different from those of the native knee.
The most important finding of this study was that a flat reconstruction of the sMCL and anteromedial structures using a semitendinosus graft restored native knee kinematics in cadaveric knees with a deficient sMCL and dMCL. Furthermore, both an isolated insufficiency of the sMCL and, more pronounced, a combined lesion of the sMCL and dMCL led to excess laxity in valgus rotation, ER, ATT, and anteromedial tibial translation, even in ACL-intact knees.

Several biomechanical studies have investigated the restraining effect of the structures on the medial side of the knee. While it was originally theorized that AMRI, marked by increased ATT as well as ER, is caused by a deficiency of the posterior oblique ligament (POL), recent studies have found that the POL has a negligible role in controlling AMRI. In contrast, the ACL, sMCL, and anteromedial structures (dMCL and AMC) have been highlighted as the crucial structures preventing excess AMRI. In a recent robotic biomechanical study utilizing the principle of superposition to determine the contribution of a structure to restraining knee movements, the ACL was found to be the primary restraint to AMRI between full extension and 30° of flexion. From 60° upward, the sMCL became the main contributor, restraining up to 36.8% of the AMRI. The dMCL and AMC resisted up to 23.1% of anteromedial translation, indicating a secondary role in controlling AMRI, whereas the POL was not found to be a significant restraint to AMRI. These findings are supported by other robotic biomechanical studies in human cadaveric knees, which found that cutting the dMCL led to markedly less AMRI than cutting the sMCL, which was the last structure sectioned in those protocols. In the present study, sectioning of the dMCL had a comparatively larger effect on knee instability than sectioning of the sMCL, in contrast with these previous studies. This observation is probably explained by the dMCL being the last structure cut: other studies have shown that with robotic force-controlled testing, the last structure to be cut typically produces a major increase in instability. This underlines that both structures have to be deficient to produce a large AMRI.

The present study is relevant given its implications for clinical practice. Several techniques for reconstruction of the medial aspect of the knee have been described, ranging from anteromedial tenodeses and single-bundle reconstructions to combined reconstructions of the sMCL with either the anteromedial or the posteromedial aspect; an isometric single-bundle reconstruction has also been described. In previous biomechanical studies, isolated reconstructions of the sMCL were not able to fully restore native knee kinematics. A robotic biomechanical study on human cadaveric knee specimens showed that a single-bundle sMCL reconstruction was not able to restore anteromedial translation, ER, and valgus rotation when the sMCL and dMCL were cut. In that study, reconstruction of the sMCL with a flat graft improved knee kinematics in comparison with a round graft; however, only an additional anteromedial reconstruction using a semitendinosus graft was able to fully restore the native knee kinematics.
The findings of the aforementioned study were corroborated by a subsequent study, which reported that in the case of anteromedial instability, a reconstruction of the sMCL and POL does not restore native knee kinematics, whereas a combined reconstruction of the sMCL and dMCL does. Furthermore, an isometric reconstruction of the anteromedial corner of the knee utilizing high-strength tape has been described, with promising biomechanical results. The present study underlines these previous findings, in that the presented flat reconstruction of the sMCL and anteromedial aspect of the knee was able to restore native knee kinematics and might therefore lead to favorable outcomes for patients with instabilities of the medial side of the knee. Furthermore, a previous study highlighted that different zones (anterior, intermediate, and posterior thirds) within the sMCL perform different functions. With a flat reconstruction, the whole surface of the sMCL can be reconstructed, ideally replicating the native anatomy without sacrificing strength of the tendon graft. Further advantages of the flat reconstruction might include improved tendon-to-bone healing because of the optimized tendon surface inside the bone tunnel. Future studies might investigate these possible biological advantages and compare the flat reconstruction against medial reconstruction techniques utilizing round tendon grafts.

This study is not without limitations. As with all biomechanical studies, this is a time-zero study that does not account for the postoperative healing process of the reconstruction. Cadaveric knee specimens of older age (mean age, 70.1 years) were used, which might not reflect the clinical reality of bone and soft tissue quality. For tibial fixation, knotless suture anchors were used in this study; however, numerous other fixation devices, including knotless anchors and staples, are available for fixing flat grafts, and how these fixation modalities might influence the performance of the described technique could not be evaluated. A major limitation of this study was that no second reconstruction technique was performed as a comparison with the flat reconstruction. The rationale for this was that creation of a flat femoral tunnel, as well as placement of multiple suture anchors in the proximal tibia, might compromise the fixation of a subsequently performed round reconstruction and bias it toward inferior results. The ACL was left intact in the present study, even though injuries to the medial side frequently occur concomitantly with ACL injuries; the effect of cutting the medial structures in the ACL-injured state has been investigated previously. Leaving the ACL intact simulated an optimal ACL reconstruction, as in previous studies, and reduced the confounding effect of different ACL reconstruction techniques on the results. Finally, the flat reconstruction showed no significant differences from the native state. This lack of a difference, however, does not mean that the reconstructed and native states are equivalent, because this study was powered only to test for differences between states and was underpowered to determine equivalence. However, the means of the reconstructed and native states were similar, so any differences between the groups that might emerge with a larger sample size are likely small and of questionable clinical relevance.
Insufficiency of the sMCL and dMCL led to excess valgus rotation, ER, ATT, and anteromedial tibial translation. A combined flat reconstruction of the sMCL and the anteromedial aspect restored this excess laxity to values not significantly different from those of the native knee.
AI-enabled

Cellular morphology is closely linked to tissue function and disease diagnosis. A common tool in pathology for assisting with disease diagnosis is immunohistochemical (IHC) staining, which is used to identify specific proteins of interest in a tissue. In this work, we propose to use deep learning to computationally generate in silico IHC staining. We demonstrate that deep learning algorithms can identify subtle features of cellular morphology that are associated with disease and previously required IHC to visualize. These results also open the door for computational approaches to reduce the need to perform time-consuming or expensive experimental IHC staining.

In the standard workflow, tissue samples collected for pathological diagnosis have a section prepared with a hematoxylin and eosin (H&E) stain for general histologic assessment. Specialized IHC stains are additionally applied to other sections of the same tissue to identify structures or specific molecules that are difficult to observe directly in the H&E-stained sample. These IHC-stained slides are commonly studied by eye under a microscope; digitization of these slides into whole-slide images (WSIs) now allows for computational assistance with evaluating them. Performing all necessary IHC stains on a sample can cost hundreds of dollars and requires several days of processing, which could be avoided with in silico IHC. Additionally, the computationally generated stain allows the IHC to be run on the same section of tissue as the original H&E-stained slide, rather than a different section, and removes artifacts that appear on real IHC stains. Beyond these advantages in the diagnostic setting, in silico IHC has the potential to make major contributions to genomic research that relies on IHC-generated phenotypes. For example, large genetic association studies of Alzheimer's disease (AD) neuropathologic endophenotypes have been severely limited by the lack of IHC data on research autopsy brains.

Recent advances in deep learning have produced increasingly accurate image recognition models, and these advances have led to deep learning being applied across fields of medicine. Within pathology, deep learning has been used to classify disease subtypes, predict mutations, and interpret IHC stains. Combined with spatial transcriptomics, deep learning has also been used to link cell morphologic features with localized gene expression. Finally, deep learning has been used to transform unstained samples into virtual H&E stains and to label cellular constituents, such as nuclei and membranes, from microscopy images. H&E and IHC are not commonly prepared on the same tissue section, making supervised learning more difficult. As a result, computationally generating IHC stains directly from H&E images has been less explored. Studies that have used IHC slides have typically focused on IHC targeting specific cell types, such as neoplastic or necrotic cells, which are more visually distinct on H&E slides than the AD lesions we studied.

We present in silico IHC, a system for computationally generating IHC staining. As a proof of concept, we apply in silico IHC to AD. The brains of patients with AD have several hallmark neuropathologic lesions: β-amyloid (Aβ) plaques, neurofibrillary tangles (NFTs), and neuritic plaques.
These hallmark changes typically occur in specific regions of the brain before the onset of cognitive impairment and then increase in density and distribution as the disease advances. IHC staining is used to highlight instances of each of these hallmark changes: IHC for Aβ is used to highlight Aβ plaques, and IHC for pathologic forms of tau (often collectively called phospho-tau) is used to highlight NFTs and neuritic plaques (NPs), which can be differentiated by visual inspection. The IHC assessments are used in consensus pathologic evaluation to determine the regional distribution of Aβ plaques, the regional distribution of NFTs, and the regional density of NPs. Together, these form the basis of the current National Institute on Aging–Alzheimer's Association consensus guidelines for the neuropathologic assessment of AD.

To train and evaluate our system, we collected a dataset of brain autopsies from 160 patients, comprising 704 samples from different regions of the brain. Within a single patient, the presence of hallmark changes may vary from region to region, so multiple samples are needed per patient. Each sample is divided into sections for staining. One section from every sample is stained with H&E combined with Luxol fast blue (LFB), which is commonly used to highlight the white matter of the brain; this combined stain is referred to as H&E-LFB. Additionally, separate sections from the sample are prepared with Aβ and pathologic tau IHC stains. The regions and stains used in our study follow the recommendations of the National Institute on Aging–Alzheimer's Association guidelines for assessing neuropathologic change in AD. We then divide the dataset into separate training, validation, and test sets by patient.

Training a deep learning model requires a dataset consisting of pairs of inputs and expected outputs (e.g., H&E-LFB images and the presence of each hallmark change). Typically, the expected output is generated by manual annotation or by combining slide-level annotations with multiple-instance learning. However, we can computationally align the serial slides prepared with different stains to provide annotation for the H&E-LFB images, reducing the need for manual annotation. After training, in silico IHC achieved areas under the receiver operating characteristic curve (AUROCs) of 0.91 (95% CI, 0.88–0.95) for classifying the presence of NFTs, 0.92 (95% CI, 0.87–0.94) for neuritic plaques, and 0.88 (95% CI, 0.82–0.93) for Aβ plaques on the held-out test set.
Data curation

Our dataset consists of autopsied brains from 160 patients. Each brain is divided into several regions of interest, resulting in 704 samples collected from multiple regions of the brain. The tissue sample from each region is prepared as a formalin-fixed paraffin-embedded block. These blocks are cut into serial sections of 5-μm thickness, with one slide prepared with H&E-LFB staining, along with at least one of pathologic tau and Aβ staining. We split the dataset into a training set of 91 patients, a validation set of 20 patients, and a test set of 19 patients. As an additional test, data for 30 consecutive patients were collected at a different time, with all sections necessary for a full analysis of the level of neuropathologic change in each brain.

In silico staining deep learning model

In silico IHC uses a trained neural network that takes an H&E-LFB-stained WSI as input and generates synthetic IHC-stained images predicting the presence of NFTs, Aβ plaques, and neuritic plaques. From the WSI, we select non-overlapping patches of 2,048 × 2,048 pixels, corresponding to 517 μm × 517 μm. Each patch is given to the trained neural network, which makes separate predictions for the probabilities of at least one Aβ plaque, NFT, or neuritic plaque appearing within the patch. The predicted probability for each patch in the WSI is mapped to colors imitating the real IHC, and these are then combined into a synthetic image for each of the targets, which can be used to identify whether the lesions are present, along with their locations.

Training in silico IHC consists of three main steps. First, we register the serial sections of each sample in the dataset so that each IHC WSI is aligned with the corresponding H&E-LFB image as closely as possible. Second, we use the IHC WSIs to identify the patches containing NFTs, Aβ plaques, and neuritic plaques. Third, we train a neural network to directly predict the presence of NFTs, Aβ plaques, and neuritic plaques from the H&E-LFB image patches (see the method details for more information).

To register the serial sections, we use the Oriented FAST and Rotated BRIEF (ORB) feature detector to identify key points in each slide. Matching key points are selected using the RANSAC algorithm, and the matched key points are then used to overlay the serial sections on each other. To assess the accuracy of the registration, we identified 50 pairs of blood vessels visible in both the H&E and one of the IHC slides. The registration process mapped these blood vessels to an average of 224 pixels apart, with an SD of 156 pixels (56 ± 39 μm). The average registration error is approximately 10% of the width of a patch, so pairs of H&E-LFB and IHC patches are closely related.

Next, to identify NFTs, Aβ plaques, and neuritic plaques, we annotated 500 phospho-tau and 500 Aβ IHC-stained patches for the presence of each. Using the annotated dataset, we train one annotator network to identify NFTs and neuritic plaques from the phospho-tau IHC slides and a second annotator network to identify Aβ plaques from the Aβ IHC slides. The annotator networks are then used to label each patch on the real IHC with the hallmark changes that are present.

Finally, we combine the registration and IHC quantification to train an end-to-end model for predicting the presence of NFTs, Aβ plaques, and neuritic plaques directly from the H&E-LFB image. To train this model, we use the registered slides to identify paired H&E-LFB and IHC patches. The annotator network uses the IHC slides to provide annotations of NFTs, Aβ plaques, and neuritic plaques, which are used as supervision to train in silico IHC.
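To make the patch-wise inference concrete, the sketch below shows the tiling-and-rendering loop described above, using the openslide-python library to read tiles from the WSI. The `model` callable, which returns the three per-lesion probabilities for a tile, is a hypothetical stand-in for the trained network.

```python
# Minimal sketch of the patch-wise staining loop, assuming openslide-python
# for WSI access; `model` is a hypothetical callable returning the three
# per-lesion probabilities (Aβ plaque, NFT, neuritic plaque) for one tile.
import numpy as np
import openslide

TILE = 2048  # pixels per side, ~517 μm at 40× magnification

def in_silico_stain(wsi_path, model):
    slide = openslide.OpenSlide(wsi_path)
    width, height = slide.dimensions
    heat = np.zeros((height // TILE, width // TILE, 3), dtype=np.float32)
    for row in range(height // TILE):
        for col in range(width // TILE):
            tile = slide.read_region((col * TILE, row * TILE), 0, (TILE, TILE))
            rgb = np.asarray(tile.convert("RGB"))
            heat[row, col] = model(rgb)  # probabilities for this patch
    # Each channel of `heat` can then be mapped to an IHC-like color scheme
    # and upsampled to produce the synthetic stain overlay.
    return heat
```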
Evaluation of in silico IHC

We run in silico IHC on a held-out test set of 83 samples from 19 patients to evaluate its ability to identify regions with NFTs, neuritic plaques, and Aβ plaques. We find that in silico IHC achieves AUROCs of 0.91 (95% CI, 0.88–0.95), 0.92 (95% CI, 0.87–0.94), and 0.88 (95% CI, 0.82–0.93), respectively. As additional verification that our results are not skewed by potential errors in the automated identification of lesions from IHC, we annotated the IHC patches corresponding to 250 random H&E patches in our test set as ground truth. When evaluated on these patches, in silico IHC achieves AUROCs of 0.92 (95% CI, 0.87–0.96), 0.90 (95% CI, 0.84–0.95), and 0.92 (95% CI, 0.84–0.97), respectively.

The H&E-LFB-stained slides can contain artifacts and other structures unrelated to AD that could potentially be falsely identified as lesions. For example, folds and tears in the tissue are common, and some regions of the brain contain pigments such as neuromelanin and lipofuscin that may appear abnormal. However, we find that in silico IHC correctly identifies these regions as negative.

To further assess the reliability of our algorithm, we collected H&E-LFB and IHC images from 30 consecutive patients with neurodegenerative diseases. These images were acquired at a different time from the data used in the algorithm's training and thus constitute a separate test set. We deployed the algorithm on this new test data without any modification. The in silico IHC matches the experimentally obtained IHC well, achieving AUROCs of 0.94 (95% CI, 0.92–0.95), 0.93 (95% CI, 0.91–0.94), and 0.83 (95% CI, 0.80–0.86) for NFTs, neuritic plaques, and Aβ plaques, respectively.

The prevalence of neuropathological hallmark changes varies widely between different areas of the brain. For example, NFTs, neuritic plaques, and Aβ plaques are significantly more common in the hippocampus than in the midbrain, and these lesions almost never appear in white matter. To evaluate the ability of in silico IHC to extract information beyond standard location features, we trained a logistic regression model to predict the appearance of hallmark changes using the region of the brain, the fraction of the patch stained with LFB (which identifies white matter), and the number of nuclei in the patch as features. This model achieves AUROCs of 0.78 (95% CI, 0.75–0.80) for NFTs, 0.80 (95% CI, 0.77–0.82) for neuritic plaques, and 0.78 (95% CI, 0.76–0.80) for Aβ plaques and is significantly outperformed by in silico IHC. This suggests that the computer vision algorithm leverages more fine-grained morphological features of the tissue neighborhoods in its assessment. Additionally, we studied the choice of neural network architecture used by in silico IHC by comparing it against AlexNet, VGG-11, and ResNet-18, with all models trained on the same data, and found that in silico IHC outperforms these alternative architectures.
Interpretation of in silico IHC predictions

To better understand the predictions made by our model, we use deep learning interpretation methods to provide attributions. Specifically, we used the integrated gradients method, which identifies the pixels in an image that the model considers most useful for making a prediction. We show several examples of attributions for neuritic and Aβ plaques alongside the corresponding IHC stain and find that the attributions match the plaques in location and size, suggesting that in silico IHC has learned to identify individual lesions despite being trained only on patch-level labels.
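A minimal sketch of how such attributions can be computed and reduced to candidate lesion regions is shown below, using the Captum library together with SciPy and scikit-image (the exact post-processing parameters are given in the method details below). The patch tensor shape, `model`, and `target` (the lesion class index) are assumptions.

```python
# Minimal sketch of attribution extraction with the Captum library, plus
# the region post-processing detailed in the method details. The patch is
# assumed to be a (1, 3, H, W) torch tensor.
import numpy as np
from captum.attr import IntegratedGradients
from scipy.ndimage import gaussian_filter
from skimage import measure

def attribution_regions(model, patch, target):
    ig = IntegratedGradients(model)
    att = ig.attribute(patch, target=target)                # per-pixel attributions
    att = att.squeeze(0).sum(dim=0).detach().cpu().numpy()  # collapse channels
    att = gaussian_filter(att, sigma=5)                     # connect nearby hotspots
    mask = att > np.percentile(att, 90)                     # keep top-decile pixels
    labeled = measure.label(mask)
    for region in measure.regionprops(labeled):
        if region.area < 500:                               # drop regions < ~32 μm²
            mask[labeled == region.label] = False
    return measure.find_contours(mask.astype(float), 0.5)   # region outlines
```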
In this work, we introduce a model for translating standard H&E-LFB-stained neuropathological samples into synthetic phospho-tau- and Aβ-stained images. Our model achieves high accuracy for classifying the presence of NFTs, neuritic plaques, and Aβ plaques on independent test samples. Moreover, it significantly outperforms models using hand-crafted features based on information about the nuclei and the region of the brain. We additionally use interpretation methods to identify the regions that the model considered most relevant for making a classification and found that these regions closely match the areas identified by the real IHC, suggesting that the model has learned fine-grained morphological features of cellular neighborhoods that are indicative of AD-related plaques and tangles.

A complete analysis of a brain sample for neurodegenerative disease requires IHC-stained slides to be prepared for many regions of the brain, greatly increasing the cost and time needed for preparing the samples, limiting diagnostic workup outside of research settings, and severely limiting large-scale genomic-endophenotype association studies. As a result, we focused on the most common neurodegenerative disease as a case study for using in silico IHC to translate routinely and cost-efficiently prepared H&E-LFB-stained slides into the necessary IHC stains. In other tissues and diseases where immunostaining is used, paired H&E and IHC samples can likewise be used to train deep learning models, decreasing the need for expert annotations and allowing large datasets with fine-grained labels to be created. The methodology we propose for developing in silico IHC can be extended to these diseases and can aid future work on translation in other areas of medical imaging.

An exciting potential application of in silico IHC is to help pathologists quickly assess samples and prioritize samples for experimental immunostaining, in both diagnostic and large-cohort research settings. In silico staining also makes it easier for pathologists to visualize different biological information on the same tissue section, compared with the typical setting in which one must mentally align stains taken on different sections.

Limitations of the study

Our work is a proof-of-concept study that demonstrates the possibility of in silico IHC. More work is needed to harden this technique into software that can be readily used in laboratories, and to ensure that variability in histochemical slide preparation from other sources does not degrade the performance of in silico IHC. We note that in silico IHC can only identify pathologic changes that produce a discernible disturbance in the H&E-LFB slides, and pathologic changes with less disruption may be more challenging to identify. For example, Aβ plaques cause less disruption than both NFTs and neuritic plaques, which may explain in silico IHC's weaker performance on Aβ plaques. It may also be more challenging to identify pretangles, which are precursors to NFTs, owing to their limited disruption on the H&E-LFB slides.
Key resources table

Resource availability

Lead contact: Further information and requests for resources should be directed to and will be fulfilled by the lead contact, James Zou ([email protected]).

Materials availability: This study did not generate new unique reagents.

Method details

Data curation

The training cohort comprised 160 cases drawn from the 90+ Study. Tissues were sampled and analyzed as previously described. Regions analyzed for the training cohort included the substantia nigra at the level of the red nucleus, the middle frontal gyrus (Brodmann area [BA] 9), the hippocampus at the level of the lateral geniculate nucleus, and the amygdala. The additional validation cohort comprised 30 consecutive cases from the 90+ Study. For this validation cohort, each case consists of samples from the primary visual cortex (BA17), the substantia nigra at the level of the red nucleus, the inferior parietal lobule (BA39), the striatum at the level of the anterior commissure (caudate nucleus and putamen), and the hippocampus at the level of the lateral geniculate nucleus. For evaluating AD neuropathologic change by consensus guidelines, Aβ staining of the inferior parietal lobule and the middle frontal gyrus receives the same score, Aβ staining of the striatum and the amygdala receives the same score, and phospho-tau staining of the primary visual cortex and the middle frontal gyrus receives the same score. Tissues were stained histochemically with H&E-LFB and immunohistochemically with antibodies to Aβ (4G8, Biolegend, cat#800701, working dilution 1:1,000) or phospho-tau (AT8, ThermoScientific, cat#MN1020, working dilution 1:1,000). All slides were digitized at 40× magnification on a Leica AT2 scanner.

Digital staining procedure

In silico IHC takes an H&E-LFB-stained WSI as input. WSIs are typically around 100,000 × 100,000 pixels (>1 GB), which is too large to process with a neural network directly. To handle this large size, in silico IHC first divides the WSI into 2,048 × 2,048-pixel patches (517 μm × 517 μm). Each patch is then passed through our trained neural network, resulting in a separate prediction of the probability that the patch contains an Aβ plaque, NFT, or neuritic plaque. The predictions for the patches are then merged into a synthetic IHC-stained image by representing each patch with a colored spot based on the probability of containing each lesion.

Registration of H&E-LFB and IHC slides

To train our system, we use serial sections of tissue stained with H&E-LFB, phospho-tau IHC, and Aβ IHC. The samples are collected from five different regions: amygdala, hippocampus, contralateral hippocampus, midbrain, and BA9. The sections from a sample are closely related owing to their serial nature (5 μm between sections), but cutting the sample destroys the exact spatial relationship between the sections: slight distortions in the tissue result from the cutting process, and the images are translated and rotated because the sections do not lie in the same position on the slide when digitized. The first step in our training procedure is therefore to register the IHC slides to the H&E-LFB slides to provide paired examples for the neural network. Because the stains have very different color schemes, we begin by using Otsu's method to threshold the H&E-LFB and IHC slides into foreground and background. After binarizing the images, the different color schemes of the stains are no longer an issue, but the main features of the sample are still visible. Next, we identify candidate key points using the ORB feature detector, which identifies areas of the image with distinctive structures (e.g., sharp corners of the tissue). For each key point, the ORB feature detector provides a descriptor, a compact binary feature vector, that allows the key point to be matched across images. With the candidate key points and descriptors for a pair of H&E-LFB and IHC slides, we create a matching between the key points from the two slides with the most similar descriptors. This process produces many correct matches but also includes incorrect matches that must be filtered out before a transform between the images can be estimated. We use the RANSAC method to identify the outliers among the matches, and we fit an affine transform on the remaining matches to register the images. We run RANSAC for 2,000 iterations and consider a key point an inlier if its error is within 25 pixels.
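The registration pipeline just described maps directly onto standard OpenCV calls. The sketch below is a minimal illustration, assuming grayscale thumbnails of the two slides as input; ORB descriptors are binary, hence the Hamming-distance matcher.

```python
# Minimal sketch of the slide registration step with OpenCV, assuming
# grayscale thumbnails of the fixed (H&E-LFB) and moving (IHC) slides.
import cv2
import numpy as np

def register(fixed_gray, moving_gray):
    # Otsu binarization removes the stain-specific color/intensity profile.
    _, fixed_bw = cv2.threshold(fixed_gray, 0, 255,
                                cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    _, moving_bw = cv2.threshold(moving_gray, 0, 255,
                                 cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    orb = cv2.ORB_create(nfeatures=5000)
    kp_f, des_f = orb.detectAndCompute(fixed_bw, None)
    kp_m, des_m = orb.detectAndCompute(moving_bw, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_f, des_m)
    dst = np.float32([kp_f[m.queryIdx].pt for m in matches])  # fixed slide
    src = np.float32([kp_m[m.trainIdx].pt for m in matches])  # moving slide
    # RANSAC rejects bad matches while fitting the affine transform
    # (2,000 iterations, 25-pixel inlier threshold, as described above).
    M, _ = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC,
                                ransacReprojThreshold=25.0, maxIters=2000)
    h, w = fixed_gray.shape
    return cv2.warpAffine(moving_gray, M, (w, h))
```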
Identification of lesions from IHC

The next step in generating the expected output for training the neural network is identifying the hallmark lesions in the IHC images. The IHC slides can show considerable variation in background color, along with unrelated structures such as tissue folds, lipofuscin, and neuromelanin, which must be distinguished from the lesions. To handle these issues, we selected 500 Aβ and 500 phospho-tau IHC regions of 16,384 × 16,384 pixels (4,161 μm × 4,161 μm) from the WSIs in the training set. The Aβ patches were annotated for instances of Aβ plaques, and the phospho-tau patches were annotated for instances of NFTs and neuritic plaques. The regions were then divided into 2,048 × 2,048-pixel patches, and a patch was considered positive for a lesion if any instance of that lesion appeared within it. With these annotated patches, we trained two separate DenseNet121 models: one to identify Aβ plaques from the Aβ IHC slides and one to identify NFTs and neuritic plaques from the phospho-tau IHC slides. Our models were implemented using the PyTorch library. The trained annotator models achieved AUCs of 0.98 for NFTs, 0.99 for neuritic plaques, and 0.97 for Aβ plaques.

Training and evaluating the H&E-LFB model

With the combined results of registration and identification from IHC, we have patch-level annotations for the H&E-LFB slides. For each H&E-LFB patch, we identify the corresponding IHC patch and run our trained IHC model on it to determine whether Aβ plaques, NFTs, or neuritic plaques are present. From our training set, we extract 190,992 patches and train a DenseNet121 model to predict the presence of Aβ plaques, NFTs, and neuritic plaques. We evaluate the performance of the model on the patches from the held-out test patients corresponding to IHC patches that were confidently classified for each hallmark change (<5% or >95%). We find that the model's predicted probabilities of the hallmark changes align closely with the true fraction of positive patches.

Model architecture and training

For generating predictions with in silico IHC, we use a DenseNet121 architecture, which has previously been shown to perform well on the ImageNet dataset. The DenseNet121 architecture consists of 120 convolutional layers arranged into 4 densely connected blocks, followed by a fully connected layer. We initialize the model with pretrained ImageNet weights and fine-tune all parameters for 150 epochs. We use a stochastic gradient descent optimizer with an initial learning rate of 1 × 10⁻⁴ and a momentum of 0.9, and we decay the learning rate by a factor of 10 every 50 epochs. The optimizer trains the model by minimizing the binary cross-entropy loss between the model's predictions and the label for each hallmark change extracted from the matched IHC patch. During training, we augment the dataset by including all rotations and reflections of the patches. For the final evaluation, we select the model from the epoch with the highest AUC on the validation set.
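A minimal sketch of this fine-tuning recipe in PyTorch is shown below: DenseNet121 with a three-output head, binary cross-entropy on the per-lesion labels, and SGD with step decay. The `train_loader` is a hypothetical DataLoader yielding (patch, labels) batches.

```python
# Sketch of the fine-tuning recipe described above; `train_loader` is a
# hypothetical DataLoader yielding (patch, labels) batches with one binary
# label per hallmark change.
import torch
import torch.nn as nn
from torchvision import models

model = models.densenet121(weights="IMAGENET1K_V1")            # pretrained init
model.classifier = nn.Linear(model.classifier.in_features, 3)  # Aβ / NFT / NP

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.1)

for epoch in range(150):
    for patches, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(patches), labels.float())
        loss.backward()
        optimizer.step()
    scheduler.step()  # divide the learning rate by 10 every 50 epochs
```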
Interpretation of in silico IHC predictions

To interpret the predictions made by in silico IHC, we first use the integrated gradients method to provide per-pixel attributions for each patch. We then sought to identify regions, rather than individual pixels, that drive positive predictions. First, we applied a Gaussian filter with a standard deviation of 5 to the attributions; this allows nearby regions with high attributions to be connected. Next, we identified pixels with attributions above the 90th percentile and extracted connected regions of such pixels. Regions smaller than 500 pixels (32 μm²) were then filtered out. Finally, the contours of the remaining regions were extracted.

Quantification and statistical analysis

CIs for the AUROCs in the results were computed using 10,000 bootstrapped samples, taking the 95% percentile range for each prediction. The performance of in silico IHC for NFT and neuritic plaque predictions was computed on the 51 samples with phospho-tau staining in the test set, and the performance for Aβ plaque predictions was computed on the 53 samples with Aβ staining in the test set. The performance of in silico IHC on the additional evaluation data was computed on 60 samples with phospho-tau staining and 90 samples with Aβ staining. The performance of the identification of lesions from IHC was computed using 75 labeled patches in the test set. Additional analysis details are provided in the results and in the figure legends. Statistical analysis was performed using Python.
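A minimal sketch of the bootstrap procedure just described, using NumPy and scikit-learn; `y_true` and `y_score` are hypothetical arrays of patch labels and predicted probabilities.

```python
# Minimal sketch of the bootstrap CIs described above (10,000 resamples,
# 95% percentile interval).
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auroc_ci(y_true, y_score, n_boot=10_000, seed=0):
    rng = np.random.default_rng(seed)
    n, aucs = len(y_true), []
    while len(aucs) < n_boot:
        idx = rng.integers(0, n, n)             # resample with replacement
        if np.unique(y_true[idx]).size < 2:     # need both classes for AUROC
            continue
        aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(aucs, [2.5, 97.5])
    return roc_auc_score(y_true, y_score), (lo, hi)
```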
Lead contact Further information and requests for resources should be directed to and will be fulfilled by the lead contact, James Zou ( [email protected] ). Materials availability This study did not generate new unique reagents.
Further information and requests for resources should be directed to and will be fulfilled by the lead contact, James Zou ( [email protected] ).
This study did not generate new unique reagents.
Data curation
The training cohort comprised 160 cases drawn from the 90+ Study ( ). Tissues were sampled and analyzed as previously described ( ; ). Regions analyzed for the training cohort included the substantia nigra at the level of the red nucleus, middle frontal gyrus or Brodmann area (BA) 9, hippocampus at the level of the lateral geniculate nucleus, and amygdala. The additional validation cohort comprised 30 consecutive cases from the 90+ Study. For this validation cohort, each case consists of samples from the primary visual cortex (BA17), substantia nigra at the level of the red nucleus, inferior parietal lobule (BA39), striatum at the level of the anterior commissure (caudate nucleus and putamen), and hippocampus at the level of the lateral geniculate nucleus. For evaluating AD neuropathologic change by consensus guidelines, Aβ staining on the inferior parietal lobule and middle frontal gyrus receive the same score, Aβ staining on the striatum and amygdala receive the same score, and phospho-tau staining on the primary visual cortex and middle frontal gyrus receive the same score. Tissues were stained histochemically with H&E-LFB, and immunohistochemically with antibodies to Aβ (4G8, Biolegend, cat#800701, working dilution 1:1,000) or phospho-tau (AT8, ThermoScientific, cat#MN1020, working dilution 1:1,000). All slides were digitized at 40× magnification on a Leica AT2 scanner.
Digital staining procedure
In silico-IHC takes an H&E-LFB-stained WSI as input ( A). WSIs are typically around 100,000 × 100,000 pixels (>1 GB), which is too large to process with a neural network directly. To handle this, in silico-IHC first divides the WSI into 2,048 × 2,048-pixel patches (517 μm × 517 μm). Each patch is then passed through our trained neural network, yielding separate predicted probabilities that the patch contains an Aβ plaque, NFT, or neuritic plaque. The patch-level predictions are then merged into a synthetic IHC-stained image by representing each patch with a colored spot based on the probability of containing each lesion.
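To make the patch-wise inference concrete, below is a minimal sketch of the loop, assuming the trained DenseNet121 (`model`) outputs three logits per patch and that the WSI is available as an H × W × 3 uint8 array; in practice, patches would typically be read lazily from the slide file (e.g., via a library such as OpenSlide) rather than held in memory, and all names here are illustrative rather than the authors' code.

```python
# Sketch of patch-wise WSI inference (illustrative, not the authors' code).
# `model` is the trained DenseNet121 returning three logits per patch
# (A-beta plaque, NFT, neuritic plaque); `wsi` is an H x W x 3 uint8 array.
import numpy as np
import torch

PATCH = 2048  # 2,048 x 2,048 pixels = 517 um x 517 um at 40x

def predict_wsi(model, wsi, device="cuda"):
    model = model.eval().to(device)
    h, w, _ = wsi.shape
    probs = np.zeros((h // PATCH, w // PATCH, 3), dtype=np.float32)
    with torch.no_grad():
        for i in range(h // PATCH):
            for j in range(w // PATCH):
                patch = wsi[i * PATCH:(i + 1) * PATCH, j * PATCH:(j + 1) * PATCH]
                x = torch.from_numpy(np.ascontiguousarray(patch))
                x = x.permute(2, 0, 1).float().div_(255).unsqueeze(0).to(device)
                # Sigmoid because each lesion type is an independent binary label.
                probs[i, j] = torch.sigmoid(model(x))[0].cpu().numpy()
    return probs  # per-patch probabilities used to color the synthetic IHC
```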
Registration of H&E-LFB and IHC slides
To train our system, we use serial sections of tissue stained with H&E-LFB, phospho-tau IHC, and Aβ IHC ( B). The samples are collected from five different regions: amygdala, hippocampus, contralateral hippocampus, midbrain, and BA9. The sections from a sample are closely related owing to their serial nature (5 μm between sections), but cutting the sample destroys the exact spatial relationship between the sections: slight distortions in the tissue result from the cutting process, and the images are translated and rotated because the sections do not lie in the same position on the slide when digitized ( ). The first step in our training procedure is therefore to register the IHC slides to the H&E-LFB slides to provide paired examples for the neural network. Direct registration is complicated by the different color schemes of the stains. To avoid this issue, we begin by using Otsu's method to threshold the H&E-LFB and IHC slides into foreground and background ( ). After binarizing the images, the different color schemes are no longer an issue, but the main features of the sample remain visible. Next, we identify candidate key points using the ORB feature detector, which flags areas of the image with distinctive structures (e.g., sharp corners of the tissue). For each key point, the ORB detector provides a descriptor, a real-valued vector, that allows the key point to be matched across images. With the candidate key points and descriptors for a pair of H&E-LFB and IHC slides, we match the key points from the two slides with the most similar descriptors. This process yields many correct matches, but also incorrect ones that must be filtered out before fitting a transform between the images. We use the RANSAC method to identify outliers among the matches, and we fit an affine transform on the remaining matches to register the images ( ). We run RANSAC for 2,000 iterations and consider a key point an inlier if its error is within 25 pixels.
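A minimal OpenCV sketch of this registration step is below; the 2,000 RANSAC iterations and 25-pixel inlier threshold follow the text, while the ORB feature budget, the use of downsampled thumbnails, and the function names are illustrative assumptions.

```python
# Illustrative sketch of the Otsu + ORB + RANSAC registration (not the
# authors' code). Inputs are single-channel uint8 images, e.g., downsampled
# WSI thumbnails rather than full-resolution slides.
import cv2
import numpy as np

def register_ihc_to_he(he_gray, ihc_gray):
    """Estimate an affine transform mapping the IHC slide onto the H&E-LFB slide."""
    # Otsu thresholding removes the stain-specific color differences.
    _, he_bin = cv2.threshold(he_gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    _, ihc_bin = cv2.threshold(ihc_gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # ORB key points and binary descriptors on the binarized images.
    orb = cv2.ORB_create(nfeatures=5000)
    kp_he, des_he = orb.detectAndCompute(he_bin, None)
    kp_ihc, des_ihc = orb.detectAndCompute(ihc_bin, None)

    # Match descriptors by Hamming distance; cross-checking drops weak matches.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_ihc, des_he)

    src = np.float32([kp_ihc[m.queryIdx].pt for m in matches])
    dst = np.float32([kp_he[m.trainIdx].pt for m in matches])

    # RANSAC rejects outlier matches before the affine fit.
    affine, inliers = cv2.estimateAffine2D(
        src, dst, method=cv2.RANSAC, ransacReprojThreshold=25, maxIters=2000)
    return affine, inliers
```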
Identification of lesions from IHC
The next step in generating the expected output for training the neural network is identifying the hallmark lesions in the IHC images. The IHCs can have considerable variation in background color, along with other unrelated structures such as folds in the tissue, lipofuscin, and neuromelanin, which must be distinguished from the NFTs and Aβ plaques. To handle these issues, we selected 500 Aβ and 500 phospho-tau IHC regions of 16,384 × 16,384 pixels (4,161 μm × 4,161 μm) from the WSIs in the training set. The Aβ regions were annotated for instances of Aβ plaques, and the phospho-tau regions were annotated for instances of NFTs and neuritic plaques. The regions were then divided into 2,048 × 2,048-pixel patches, which were considered positive for each lesion if any instance of the lesion appeared within the patch. With these annotated patches, we trained two separate DenseNet121 models ( ) to identify Aβ plaques from the Aβ IHC slides and to identify NFTs and neuritic plaques from the phospho-tau IHC slides. Our models were implemented using the PyTorch library ( ). The trained models achieved AUCs of 0.98 for NFTs, 0.99 for neuritic plaques, and 0.97 for Aβ plaques ( ).
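As a small illustration of the patch-labeling rule, the following sketch marks a patch positive when any annotated instance overlaps it; it assumes annotations are stored as (x, y, width, height) boxes in region pixel coordinates, which is a hypothetical format chosen for the sketch rather than the paper's stated one.

```python
# Illustrative patch-labeling rule: a 2,048 x 2,048 patch of a 16,384 x 16,384
# region is positive if any annotated lesion instance overlaps it. Assumes
# annotations are (x, y, w, h) boxes in region pixel coordinates (hypothetical).
PATCH = 2048

def patch_labels(region_size, boxes):
    n = region_size // PATCH                      # 8 x 8 grid for 16,384 px
    labels = [[0] * n for _ in range(n)]
    for (x, y, w, h) in boxes:
        for i in range(y // PATCH, min(n - 1, (y + h) // PATCH) + 1):
            for j in range(x // PATCH, min(n - 1, (x + w) // PATCH) + 1):
                labels[i][j] = 1                  # patch at row i, column j is positive
    return labels
```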
Training and evaluating the H&E-LFB model
With the combined results of registration and lesion identification from IHC, we have patch-level annotations for the H&E-LFB slides. For each H&E-LFB patch, we identify the corresponding IHC patch and run our trained IHC model on it to determine whether Aβ plaques, NFTs, or neuritic plaques are present. From our training set, we extract 190,992 patches and train a DenseNet121 model to predict the presence of Aβ plaques, NFTs, and neuritic plaques. We evaluate the performance of the model on the patches from the held-out test patients corresponding to IHC patches that were confidently classified for each hallmark change (predicted probability <5% or >95%). We find that the model's predicted probabilities of the hallmark changes are closely aligned with the true fraction of positive patches ( ).
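This comparison is essentially a calibration check; a minimal sketch using scikit-learn is shown below, where `y_true` and `y_prob` are assumed to hold the patch labels and the model's predicted probabilities for one lesion type (illustrative names, not the authors' code).

```python
# Sketch of the calibration check (illustrative). y_true: 0/1 patch labels
# derived from the matched IHC patches for one lesion type; y_prob: the
# H&E-LFB model's predicted probabilities for the same patches.
from sklearn.calibration import calibration_curve

frac_positive, mean_predicted = calibration_curve(y_true, y_prob, n_bins=10)
for p, f in zip(mean_predicted, frac_positive):
    print(f"predicted {p:.2f} -> observed fraction positive {f:.2f}")
```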
Model architecture and training
For generating predictions with in silico-IHC, we use a DenseNet121 architecture ( ), which has previously been shown to perform well on the ImageNet dataset ( ; ). The DenseNet121 architecture consists of 120 convolutional layers arranged into 4 densely connected blocks, followed by a fully connected layer. We initialize the model with pretrained ImageNet weights, and we fine-tune all parameters in the model for 150 epochs. We use a stochastic gradient descent optimizer with an initial learning rate of 1 × 10⁻⁴ and a momentum of 0.9, and we decay the learning rate by a factor of 10 every 50 epochs. The optimizer trains the model by minimizing the binary cross-entropy loss between the model's predictions and the label for each hallmark change extracted from the matched IHC patch. During training, we augment the dataset by including all rotations and reflections of the patches. For our final evaluation, we select the model from the epoch with the highest AUC on the validation set.
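A minimal PyTorch/torchvision sketch of this training setup follows; the hyperparameters (SGD, learning rate 1 × 10⁻⁴, momentum 0.9, tenfold decay every 50 epochs, binary cross-entropy) come from the text, while the data loader, batch handling, and device placement are assumptions.

```python
# Minimal sketch of the fine-tuning setup described above (illustrative);
# `train_loader` is an assumed DataLoader yielding (patches, labels).
import torch
import torch.nn as nn
from torchvision import models

model = models.densenet121(weights="IMAGENET1K_V1")             # ImageNet initialization
model.classifier = nn.Linear(model.classifier.in_features, 3)   # A-beta, NFT, neuritic

criterion = nn.BCEWithLogitsLoss()                              # per-lesion binary cross-entropy
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.1)

for epoch in range(150):
    for patches, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(patches), labels.float())        # labels from matched IHC patches
        loss.backward()
        optimizer.step()
    scheduler.step()                                            # decay lr by 10x every 50 epochs
    # after each epoch: compute validation AUC and keep the best checkpoint
```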
Interpretation of in silico-IHC predictions
To interpret the predictions made by in silico-IHC, we first use the integrated gradients method to provide per-pixel attributions for each patch. We then identify regions, rather than pixels, that drive positive predictions. First, we apply a Gaussian filter to the attributions with a standard deviation of 5 for the Gaussian kernel; this allows nearby regions with high attributions to be connected. Next, we identify pixels with attributions above the 90th percentile and extract connected regions of pixels. Regions smaller than 500 pixels (32 μm²) are then filtered out. Finally, the contours of the remaining regions are extracted.
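A sketch of this attribution post-processing is below, assuming the per-pixel attributions for one patch are already available as a 2D array `attr` (e.g., from an integrated-gradients implementation such as Captum's); the smoothing, percentile, and size thresholds follow the text, while the library choices are assumptions.

```python
# Sketch of the attribution post-processing (illustrative); `attr` is a 2D
# float array of per-pixel attributions for one patch.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage import measure

def attribution_regions(attr, sigma=5, percentile=90, min_pixels=500):
    smoothed = gaussian_filter(attr, sigma=sigma)          # connect nearby hot pixels
    mask = smoothed > np.percentile(smoothed, percentile)  # keep top-10% attributions
    labeled = measure.label(mask)                          # connected components
    keep = np.isin(labeled, [r.label for r in measure.regionprops(labeled)
                             if r.area >= min_pixels])     # drop regions < 500 px (~32 um^2)
    return measure.find_contours(keep.astype(float), 0.5)  # contours of surviving regions
```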
CIs for the AUROCs in the results were computed using 10,000 bootstrapped samples, obtaining 95% percentile intervals for each prediction. The performance of in silico-IHC for NFT and neuritic plaque predictions was computed on 51 samples with phospho-tau staining in the test set, and the performance for amyloid plaque predictions was computed on 53 samples with Aβ staining in the test set. The performance of in silico-IHC on the additional evaluation data was computed on 60 samples with phospho-tau staining and 90 samples with Aβ staining. The performance of the identification of lesions from IHC was computed using 75 labeled patches in the test set. Additional analysis details are provided in the Results section and in figure legends. Statistical analysis was performed using Python.
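A minimal sketch of the bootstrap CI computation for one AUROC is shown below, assuming NumPy arrays of labels and predicted scores; the resampling unit is a simplification, since the text does not specify it.

```python
# Sketch of a 95% percentile bootstrap CI for an AUROC (illustrative).
# Assumes y_true and y_score are paired NumPy arrays for one prediction task.
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auroc_ci(y_true, y_score, n_boot=10_000, seed=0):
    rng = np.random.default_rng(seed)
    n = len(y_true)
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)              # resample cases with replacement
        if len(np.unique(y_true[idx])) < 2:      # need both classes for an AUROC
            continue
        aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
    return np.percentile(aucs, [2.5, 97.5])      # 95% percentile interval
```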
The effect on dental care utilization from transitioning pediatric Medicaid beneficiaries to managed care

INTRODUCTION

Historically, states have used fee-for-service (FFS) delivery models for their Medicaid programs. Under FFS, providers are reimbursed for each billable service, providing no incentive for providers to contain costs, use the most cost-effective treatments, or provide services that are not reimbursed. The resulting ever-increasing health care costs put a strain on public finances, particularly at the state level, where Medicaid accounts for about 29% of total state spending (Rudowitz et al., ). In recent years, states have attempted to control Medicaid costs by switching to private managed care organizations (MCOs) to deliver Medicaid services, with over four out of every five Medicaid beneficiaries enrolled in some form of managed care in 2018 (Medicaid.gov, ). As of July 2019, 40 states including the District of Columbia contract with MCOs to provide comprehensive risk-based health care plans for at least some Medicaid beneficiaries (Hinton et al., ).

Advocates for managed care argue that, when compared to FFS, private MCOs have greater expertise and resources and are better able to manage the health of Medicaid beneficiaries through pay-for-performance incentives. This means MCOs potentially improve health care management, increase provider accountability, and support better monitoring of health care quality while offering states greater control and predictability about future costs (MACPAC, n.d. (a)). At the same time, MCOs may reduce utilization and expenditures by restricting the number of in-network providers and lowering reimbursement rates paid to those providers. Additionally, some arrangements pay MCOs capitated rates per beneficiary, further incentivizing MCOs to control costs (MACPAC, n.d. (b)).

While there is a general move to MCOs to cover traditional health care services, many states continue to cover specialized services such as long-term care, behavioral health, and dental services under traditional FFS Medicaid. There is little evidence comparing MCOs to traditional FFS plans in Medicaid programs when it comes to these specialized services. There are several studies examining access to and demand for Medicaid dental services after states either increased provider reimbursement or expanded Medicaid dental benefits (Buchmueller et al., ; ; Choi, ; Decker, ). However, these studies did not examine dental care utilization when Medicaid programs transition dental benefits from FFS to managed care. In this paper, we address this gap by focusing on the role of MCOs in delivering dental services.

There are a number of studies that examine the role of managed care in Medicaid dental services (Burns, ; Coughlin & Long, ; Marton et al., ; Zuckerman et al., ). However, these studies lacked repeated cross-sectional data before and after the transition to managed care or lacked a comparison group of Medicaid beneficiaries in states that stayed under FFS Medicaid for dental services.

It is important for policymakers to focus on pediatric dental services because access to dental care, particularly at a young age, can affect development and productivity in future years. Good oral health in childhood can lead to better labor market outcomes later in life (Glied & Neidell, ).
Fortunately, dental care use among children has increased since the early 2000s and is at or near its highest recorded level (American Dental Association, ). This increase in pediatric dental care utilization has been driven primarily by publicly insured children (American Dental Association, ; Crall & Vujicic, ). Also, racial disparities in untreated caries (i.e., cavities) are narrowing among children (American Dental Association, ). Hence, it is important for policymakers to know whether private provision of pediatric Medicaid dental services through MCOs could enhance or reverse the progress children have made over the last 20 years with respect to utilization of dental services. That is an open question we hope to answer in this paper.

Furthermore, the role MCOs play in providing dental services is not without controversy. For example, Maryland implemented managed care for pediatric dental services in 1997 in an effort to improve dental service quality, but the MCOs did not increase dentist participation or utilization among Medicaid beneficiaries (Thuku et al., ). After the death of a pediatric Medicaid dental patient in 2007, Maryland carved its Medicaid dental program out of managed care in 2009. Reimbursement rates were increased and administrative services were streamlined through a single vendor (Thuku et al., ).

In this study, we utilized difference-in-differences estimation to measure how dental care utilization among pediatric Medicaid beneficiaries changed in three states (Indiana, Missouri and Nebraska) that transitioned from FFS to managed care between 2016 and 2018, relative to 18 states that maintained universal FFS provision of Medicaid dental services over the same time period. We relied on dental claims data from the Transformed Medicaid Statistical Information System (T-MSIS). The pediatric population is studied because all states are mandated to cover dental benefits for children under age 21 in Medicaid through the Early and Periodic Screening, Diagnostic and Treatment (EPSDT) benefit. As of 2016, 21 states and the District of Columbia had pediatric Medicaid dental benefits administered by MCOs (Gupta et al., ). In contrast, Medicaid dental coverage for adults is optional, and adult dental benefits vary across states (Medicaid.gov, n.d.). Among the limited number of states that also provided adult dental benefits, five states had their dental program administered by MCOs (Gupta et al., ). Therefore, by studying the pediatric population, we examine the role of MCOs in a specialized service area where dental benefits are comprehensive and there is less heterogeneity across states.

Our findings indicate that dental care utilization, measured as visits per 10,000 beneficiaries and the share of beneficiaries with a dental claim, declined following the adoption of dental managed care, especially in the first few quarters after implementation. Utilization in Indiana and Nebraska also decreased across dental service categories (diagnostic and preventive) and for specific procedures (prophylaxis and fluoridation). There was weaker evidence that utilization of restorative dental services declined significantly in the three states following the transition to dental managed care.
The paper is structured as follows: Section 2 provides a conceptual framework for our analysis and a timeline of the managed care reforms in Indiana, Missouri and Nebraska; Section 3 describes the dental claims data; Section 4 provides an overview of our empirical strategy; Section 5 presents the results; and Section 6 concludes the paper, exploring the health policy implications of our findings.
CONCEPTUAL FRAMEWORK AND TIMELINE

In an effort to limit health care costs and improve outcomes for patients, public payers such as Medicaid and Medicare have transitioned from the public provision of health care services via traditional FFS models to private provision of services through MCOs. Proponents of MCOs suggest that private provision of public services leads to greater patient outreach, better case management, improved administrative services for providers, and improved health outcomes. These characteristics could increase utilization. At the same time, MCOs can use their size and financial incentives to channel patients to preferred in-network providers who accept lower prices (Wu, ). Managed care organizations are often reimbursed under risk-based contracts with capitated arrangements, meaning that MCOs have an incentive to keep expenditures below a certain threshold. This may be one reason why the existing literature finds mixed results in terms of health care outcomes, utilization, expenditures and consumer welfare when states switch from publicly provided FFS to a private managed care organization (MCO) model (Aizer et al., ; Curto et al., ; Dranove et al., ; Duggan, ; Duggan & Morgan, ; Town & Liu, ). The ultimate effect depends in part on the level of spending and provider reimbursement in the FFS program prior to the transition to managed care (Duggan & Hayford, ). For example, when FFS programs are heavily rationed, private provision of health care services within public programs could lead to better health outcomes; conversely, when FFS programs are more generous, a transition to managed care could lead to worse health outcomes (Layton et al., ).

One would expect the mechanisms through which managed care affects health care utilization to be the same as the mechanisms through which managed care affects specialized care, such as dental services. However, there may be some key differences. In particular, managed care entities handling health care services outside of dental care may use care coordination across various health care services, because these services could be interlinked or the same service could be provided in multiple settings. For example, physical therapy after a hospitalization could be linked to the hospital stay and provided at the patient's home, an outpatient rehab center, or a skilled nursing facility. The provision of dental services, in contrast, is often siloed from other health care services, and dental services are generally provided in one setting. Hence, one may expect the impact of managed care on dental service utilization to differ from the impact of managed care provision on medical services.

Within the context of pediatric dental services, states are mandated to cover basic services, but the relative budget the state allocates to dental services under FFS and after transitioning to managed care may affect the utilization of dental services. Managed care organizations may be able to utilize their expertise to enhance access to dentists. However, MCOs must still operate within the budgets provided to them by the state. Therefore, less generous payments to the MCO will require it to engage in strategies to provide services within the budget it is provided. This means MCOs may negotiate with dentists to pay lower rates, which could ultimately lead to some dentists being excluded from an MCO's network.
Furthermore, MCOs may cap basic care (e.g., X-rays, prophylaxis, sealants) at a certain number of services per year, which could lower utilization among Medicaid beneficiaries. Overall, it is an empirical question whether the transition of dental services from FFS to managed care increases or decreases dental care utilization.

In this study, we examined three states that transitioned their Medicaid pediatric dental benefits from FFS to managed care. Indiana, Missouri and Nebraska made this transition at different points in 2017. Table provides a summary of the managed care programs in these three states.

Indiana implemented its Medicaid managed care program, Hoosier Healthwise (HHW), in 1997 but continued to provide pediatric dental services on an FFS basis (Medicaid.gov, ). On January 1, 2017, Indiana mandated that all dental services for children under the age of 19 be administered via HHW (In.gov, n.d.; Medicaid.gov, ). Four MCOs administered pediatric Medicaid dental benefits in Indiana: DentaQuest on behalf of Anthem, CareSource, MDWise, and Managed Health Services (DentaQuest, n.d.; In.gov, n.d.). The transition to managed care occurred quickly, with only 9% of pediatric dental claims associated with MCOs in December 2016, rising to over 85% in January 2017, and then to 90%–96% thereafter (Figure ).

Missouri staggered its implementation of managed care by region. Medicaid beneficiaries in eastern and western Missouri (e.g., the St. Louis and Kansas City regions) began to be covered by MCOs in the mid-1990s. Southwest Missouri and portions of central Missouri transitioned their pediatric Medicaid populations to MCOs on May 1, 2017 (Missouri Health Net, ). Between January 2016 and April 2017, the percentage of total pediatric Medicaid dental claims in Missouri paid by MCOs ranged from 48% to 65%. After Missouri implemented managed care in the rest of the state, the percentage of pediatric dental claims paid by MCOs rose to over 96% (Figure ). Missouri Medicaid beneficiaries under age 21 were mandated to enroll in managed care except those eligible for SSI, disabled children, children with special health needs, or those in foster care (Missouri Health Net, ). As of 2017, Home State Health, Missouri Care, and United Healthcare administered the comprehensive MCO plans in Missouri (Medicaid.gov, ). In this paper, counties in Missouri that transitioned to dental managed care prior to May 2017 were excluded from all analyses except Figure .

Effective October 1, 2017, Nebraska transitioned its pediatric Medicaid beneficiaries from FFS to managed care for dental services, contracting with Managed Care of North America (MCNA) to administer pediatric dental services (Medicaid.gov, ). The transition established a dental home program to better coordinate relationships between providers and beneficiaries (Nebraska Department of Health and Human Services, n.d.). Prior to October 2017, less than 1% of total pediatric Medicaid dental claims were paid by managed care. After the transition occurred, over 99% of total dental claims in Nebraska were paid by an MCO (Figure ). In Nebraska, MCNA is required to follow proper quality and accreditation guidelines (Medicaid.gov, ).

When transitioning to managed care, Indiana and Missouri utilized a comprehensive contract (Medicaid.gov, , ), while Nebraska used a prepaid ambulatory health plan (PAHP) contract for dental services (Medicaid.gov, ).
Both comprehensive and PAHP contracts often contain network and quality requirements, require MCOs to facilitate outreach between providers and beneficiaries, and typically involve a risk-based capitated payment paid regardless of whether beneficiaries receive services (MACPAC, n.d. (b)). MCO plans under both types of contracts would be at risk for losses if they make payments to providers in excess of what the state pays the MCO. On the other hand, if payments to providers are less than what the state pays the MCO, managed care plans would be allowed to retain the profits as long as the MCO meets medical loss ratio requirements and reinvests any excess funds towards quality improvement (MACPAC, n.d. (b)). Therefore, MCOs under comprehensive and PAHP contracts may have an incentive to reduce utilization of dental services relative to FFS, depending on the risk, payment, and provider arrangements the state uses with MCOs. These factors may also affect the relative "size" of the incentive when comparing similar types of contracts across states.

When transitioning from FFS to managed care, comprehensive and PAHP contracts could differentially impact utilization. Comprehensive contracts cover a broad range of services (e.g., medical services), in contrast to PAHP contracts, which are usually carve-outs for specialty services (e.g., dental services). While both comprehensive and PAHP contracts put financial risk on the MCOs, comprehensive contracts require the MCO to optimize a single capitated payment and contractual requirements across dental, medical, and other services, while dental PAHP contracts have capitated payments and other contractual requirements that are tied solely to providing dental services. Because dental services are lower cost and more predictable than medical services, the more focused PAHP contracts may have less incentive to limit dental care use among Medicaid beneficiaries relative to comprehensive contracts.
DATA

Currently, each state plus the District of Columbia submits monthly Medicaid claims and enrollment data to the Centers for Medicare and Medicaid Services (CMS) through the Transformed Medicaid Statistical Information System (T-MSIS). Claims data include medical (inpatient and outpatient), pharmacy, long-term care and dental claims. Dental claims are housed in the "other services" claims tables. The enrollment tables include demographic and location (state, county, zip code) characteristics for each beneficiary enrolled in Medicaid or CHIP. The claims and enrollment data available in T-MSIS cover the universe of Medicaid and CHIP beneficiaries in each state, the District of Columbia and associated territories.

In T-MSIS, the claims tables are split into separate header and line tables. The header tables include identifying information for whether the claim is paid on an FFS basis or by managed care. We used this identifying information to classify dental claims as FFS or managed care. The header and line tables are linkable through a unique claim identifier. From the other services line tables, we extracted all dental procedure codes that begin with the letter "D" (D0100-D9999). These codes come from the Code on Dental Procedures and Nomenclature (CDT) (American Dental Association, ). While we have comprehensive information about the claims, a key limitation of the T-MSIS claims data is that all payment information is masked for MCOs per CMS requirements. Therefore, we examine utilization but not the prices paid or aggregate expenditures for those services.

At the county level in calendar years 2016 through 2018, we calculated two measures of utilization on a quarterly basis: the share of beneficiaries ages 0-18 with a dental claim and the number of visits per 10,000 beneficiaries. These are the two main outcome variables we examined in our analysis.

In supplementary analysis, we also examined select preventive and restorative services. We examined these procedure category outcome measures because one might expect the effect of managed care on restorative dental care utilization to be smaller than the effect on other dental service categories. If a child has tooth pain or a cavity, a restorative procedure becomes medically necessary, whereas fluoridation, diagnostic and preventive care could more easily be limited to conform to clinical dental guidelines. Additionally, restorative services typically cost more than preventive services (Meyerhoefer et al., ). The outcome variables examined in the supplementary analysis included: the share of enrolled beneficiaries with a diagnostic claim (D0100-D0999), the share of beneficiaries with a preventive claim (D1000-D1999), the share of beneficiaries with a prophylaxis claim (D1110, D1120), the share of beneficiaries with a fluoridation claim (D1206, D1208), and the share of beneficiaries with a restorative claim (D2000-D2999). Prophylaxis (e.g., dental cleaning) and fluoridation procedures are considered two types of preventive services.

Access to T-MSIS Medicaid and CHIP dental claims data is part of a data use agreement approved by CMS (DUA RSCH-2020-5563: "The State of Oral Healthcare Use, Quality and Spending: Findings from Medicaid and CHIP Programs").
EMPIRICAL STRATEGY

From 2016 to 2018, the three treatment states transitioned pediatric dental benefits in their Medicaid program from publicly administered FFS plans to privately administered managed care plans. We compared these states to 18 control states where Medicaid programs remained FFS (i.e., the percentage of total pediatric dental claims remained at or near 100% FFS) throughout the study period. This assures we did not include in the control group states that may have gradually shifted towards FFS or managed care over time. We also excluded from the regression analyses Missouri counties that transitioned to managed care prior to 2017. Our final analytic sample includes a balanced panel of 1,086 counties where each county is observed for 12 quarters (T = 12) spanning from 2016 through 2018. Across all time periods, all counties from the three treatment states and 18 control states were pooled together in the main regression analyses.

Given the staggered introduction of dental managed care in the three treatment states, estimating a difference-in-differences specification via two-way fixed effects (TWFE) could lead to biased policy estimates. As described by Goodman-Bacon ( ), in a staggered policy intervention setting, the TWFE policy estimate is a weighted average of multiple comparisons: an early-treated group versus a never-treated group, a later-treated group versus a never-treated group, an early-treated group versus a later-treated group before the latter is treated, and a later-treated group versus an earlier-treated group after the early group is treated. It is this last comparison that can cause bias in the TWFE estimator when treatment effects vary over time or treatment cohort. For example, in our case, Indiana was the first to be exposed to treatment while Nebraska was the last. One portion of the TWFE estimate consists of a 2 × 2 difference-in-differences comparison of treated Nebraska observations versus Indiana observations that have already been treated. Since Indiana was already on its treatment path, this particular 2 × 2 comparison could contaminate the composite TWFE estimate.

To assess the amount of potential bias in the composite TWFE estimate, we estimated a detailed Goodman-Bacon ( ) decomposition, the results of which are shown in Figure and Table for the share of beneficiaries with a dental claim. About 3% of the weight in the composite TWFE estimate comes from the potentially biased later-treated versus earlier-treated comparison. Although the timing groups (i.e., earlier treated vs. later treated and later treated vs. earlier treated) have a small overall weight, their average difference-in-differences estimate is about two times as large as the difference-in-differences estimate where a treated group is compared to the never-treated group. This suggests potential heterogeneity in treatment effects across treatment cohorts and time.

To mitigate the potential bias in TWFE, Callaway and Sant'Anna ( ) proposed a difference-in-differences estimator that accounts for staggered treatment entry and many time periods, as in the case we examine in this paper. Callaway and Sant'Anna ( ) use outcome regression, inverse probability weighting and doubly robust methods to develop their difference-in-differences estimate.
However, as shown by Wooldridge ( ), there is nothing inherently wrong with TWFE estimation in the presence of staggered entry except that it has been used in an overly restrictive manner when generating difference-in-differences policy estimates, which exposes estimates to bias as described by Goodman-Bacon ( ). To mitigate the bias from comparing later-treated groups to earlier-treated groups, Wooldridge ( ) proposed a very flexible extended TWFE (ETWFE) regression estimator using interaction terms to account for staggered entry and multiple time periods in a panel data setting. The ETWFE regression estimator estimates separate average treatment effects on the treated (ATT) by treatment cohort and by time. By computing separate ATTs by treatment cohort and time, the ETWFE estimator does not compare later-treated units to earlier-treated units. The ATTs can then be aggregated to produce cohort-specific and overall aggregate treatment effect estimates. This is very similar to the approach used by Callaway and Sant'Anna ( ) except that standard regression methods are used to obtain the ATTs used for aggregation. We followed the methodology proposed by Wooldridge ( ) in what follows, though our results are robust to using the method proposed by Callaway and Sant'Anna ( ), as discussed further in the Robustness Checks section.

In our setup, for county $c$ and periods $t = 1, \ldots, 12$, where $t = 1$ corresponds to 2016 quarter 1, we defined a vector of cohort indicators, $\mathbf{d} = (d_{c5}, d_{c6}, d_{c8})$, identified by when each county is exposed to treatment (i.e., dental managed care). Once counties are exposed to treatment, they remain treated. Never-treated cohorts have treatment occur in period $\infty$. These cohort indicators do not vary over time within a county; instead, they identify counties in cohorts (i.e., states) that eventually get exposed to treatment. In our application, the cohort indicators vary by state, and the cohorts are Indiana, Missouri and Nebraska. The cohort indicator is set to period 5 (Quarter 1, 2017) for Indiana, the first treatment state, period 6 (Quarter 2, 2017) for Missouri, and period 8 (Quarter 4, 2017) for Nebraska.

In the presence of, and conditional on, a vector of time-constant pre-treatment covariates, $X_c$, we used a potential outcomes framework to define an average treatment effect on the treated (ATT), $\tau_{rt}$, for each treated cohort $r = 5, 6, 8$ and time periods $t = 1, \ldots, 12$:

(1) $\tau_{rt} \equiv \mathrm{E}\left[ y_t(r) - y_t(\infty) \mid d_{cr} = 1, X_c \right]$

where $y_t(r)$, $r \in \{5, 6, 8\}$, is the potential outcome in time period $t$ if an observation enters treatment in time period $r$, and $y_t(\infty)$ is the potential outcome in time period $t$ for an observation that is never treated. Included in the covariate vector $X_c$ are the following 2016 (pre-reform) county-level covariates: average county unemployment rate, dentists per capita, and median household income. These covariates vary by county.

To identify cohort- and time-specific ATTs in the presence of a staggered policy intervention, we needed to assume linearity, a conditional no-anticipation (CNA) assumption, and a conditional common/parallel trends (CCT) assumption:

(CNA) $\mathrm{E}\left[ y_t(r) - y_t(\infty) \mid \mathbf{d}, X_c \right] = 0$ for $t < r$

(CCT) $\mathrm{E}\left[ y_t(\infty) - y_1(\infty) \mid \mathbf{d}, X_c \right] = \mathrm{E}\left[ y_t(\infty) - y_1(\infty) \mid X_c \right]$ for $t = 2, \ldots, 12$

The CNA assumption holds if the potential outcomes are the same, $y_t(r) = y_t(\infty)$, prior to exposure to treatment.
The CCT assumption says that, conditional on covariates $X_c$ (unemployment rate, dentists per capita, median household income), the average trend in the control cohort, in every period relative to the first period, does not depend on treatment status, which is captured by $\mathbf{d}$. The CCT assumption implies that the average outcomes for treated groups and control groups (in this case, never-treated groups) would follow parallel paths in the absence of treatment.

Assuming that all conditional expectations are linear in $X_c$, meaning that conditional on $d_{cr} = 1$, $\dot{X}_c = X_c - \mathrm{E}(X_c \mid d_{cr} = 1)$, and that the CNA and CCT conditions hold, we can identify the ATTs and estimate the following equation by pooled OLS to generate the ETWFE estimator:

(2) $\mathrm{E}\left( y_{ct} \mid \mathbf{d}, X_c \right) = \gamma + X_c \kappa + \sum_{r \in \{5,6,8\}} \lambda_r d_{cr} + \sum_{r \in \{5,6,8\}} d_{cr} \cdot X_c \zeta_r + \sum_{s=2}^{12} \theta_s f_{st} + \sum_{s=2}^{12} f_{st} \cdot X_c \pi_s + \sum_{r \in \{5,6,8\}} \sum_{s=r}^{12} \tau_{rs}\, d_{cr} \cdot f_{st} + \sum_{r \in \{5,6,8\}} \sum_{s=r}^{12} d_{cr} \cdot f_{st} \cdot \dot{X}_{cr} \rho_{rs}$

where

(3) $\dot{X}_{cr} = X_c - N_r^{-1} \sum_{h=1}^{N} d_{hr} X_h$

the $f_{st}$ are time fixed effects, and $N_r$ is the number of observations in cohort $r$. By centering the covariates around their cohort-specific means in the last term of Equation (2), as described in Equation (3), we can recover the estimated $\tau_{rs}$ as the ATTs. The last estimated coefficient vector, $\rho_{rs}$, allows for heterogeneous treatment effects, which are also called "moderating effects" by Wooldridge ( ). These coefficients can capture how treatment effects vary across sub-populations. In our context, treatment effects could vary by the economic health of a particular county, which could be captured by the unemployment rate and median household income. The effect of the managed care reforms could also vary in counties that have a high or low supply of dentists, as captured by county-level dentists per capita.

The ATTs estimated from Equation (2) can then be aggregated by cohort (e.g., state) and time to generate a cohort-specific aggregate treatment effect,

(4) $\hat{\bar{\tau}}_r = \frac{1}{12 - r + 1} \sum_{t=r}^{12} \hat{\tau}_{rt}$

and an overall aggregate treatment effect,

(5) $\hat{\bar{\tau}} = \frac{1}{\{\text{Overall Number of Estimated ATTs}\}} \sum_{r \in \{5,6,8\}} \sum_{t=r}^{12} \hat{\tau}_{rt}$

The standard errors for the aggregated treatment effects were estimated using the delta method. Given that the data are aggregated to the county-quarter level, we weighted the counties included in Equation (2) by their average Medicaid enrollment for children ages 0-18 from 2016 through 2018.

To test for violation of the common trends assumption, we estimated a version of Equation (2) that includes cohort-specific linear time trends:

(6) $\mathrm{E}\left( y_{ct} \mid \mathbf{d}, X_c \right) = \gamma + X_c \kappa + \sum_{r \in \{5,6,8\}} \lambda_r d_{cr} + \sum_{r \in \{5,6,8\}} d_{cr} \cdot X_c \zeta_r + \sum_{s=2}^{12} \theta_s f_{st} + \sum_{s=2}^{12} f_{st} \cdot X_c \pi_s + \sum_{r \in \{5,6,8\}} \sum_{s=r}^{12} \tau_{rs}\, d_{cr} \cdot f_{st} + \sum_{r \in \{5,6,8\}} \sum_{s=r}^{12} d_{cr} \cdot f_{st} \cdot \dot{X}_{cr} \rho_{rs} + \sum_{r \in \{5,6,8\}} \omega_r d_{cr} t$

We then conducted the following joint test:

(7) $H_0: \omega_5 = \omega_6 = \omega_8 = 0$

Rejection of the null hypothesis in Equation (7) indicates violation of the common trends assumption. Fortunately, if the common trends assumption is violated, including cohort-specific time trends, as in Equation (6), would act as a correction for the violation. Because Indiana, Missouri and Nebraska implemented their transitions to dental MCOs statewide, we clustered standard errors by state.
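To make Equations (2)-(4) concrete, below is a minimal Python sketch of the ETWFE regression using statsmodels; the data frame layout and column names (`y`, `cohort`, `quarter`, `state`, `enroll`, and the three 2016 covariates) are illustrative assumptions, and the cohort-demeaned covariate interactions (the ρ terms) are omitted for brevity, so this is a sketch of the approach rather than the authors' exact specification.

```python
# Sketch of the ETWFE regression in Equation (2) (illustrative). Assumes a
# balanced county-quarter panel `df` with columns:
#   y        outcome (e.g., share of beneficiaries with a dental claim)
#   cohort   5 = Indiana, 6 = Missouri, 8 = Nebraska, 0 = never treated
#   quarter  1-12 (2016Q1-2018Q4)
#   state    state identifier used for clustering
#   enroll   average pediatric Medicaid enrollment (regression weight)
#   unemp, dentists_pc, med_inc   the 2016 pre-treatment covariates
import statsmodels.formula.api as smf

# One dummy per treated cohort-by-post-quarter cell; its coefficient is tau_rs.
for r in (5, 6, 8):
    for s in range(r, 13):
        df[f"att_{r}_{s}"] = ((df["cohort"] == r) & (df["quarter"] == s)).astype(float)
att_terms = " + ".join(f"att_{r}_{s}" for r in (5, 6, 8) for s in range(r, 13))

formula = (
    "y ~ C(cohort) + C(quarter)"
    " + unemp + dentists_pc + med_inc"
    " + C(cohort):(unemp + dentists_pc + med_inc)"   # cohort-covariate terms
    " + C(quarter):(unemp + dentists_pc + med_inc)"  # time-covariate terms
    f" + {att_terms}"                                # the tau_rs ATT cells
)
fit = smf.wls(formula, data=df, weights=df["enroll"]).fit(
    cov_type="cluster", cov_kwds={"groups": df["state"]})

# Cohort-specific aggregate ATT, Equation (4), e.g., for Indiana (r = 5):
tau_bar_indiana = sum(fit.params[f"att_5_{s}"] for s in range(5, 13)) / 8
```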
For clustering purposes, we only considered the Missouri counties that transitioned to dental managed care during the sample period; counties that transitioned prior to 2017 were not included in the sample. To mitigate potential issues in calculating clustered standard errors in a difference-in-differences setting with a small number of treated groups (Cameron et al., ; Conley & Taber, ), we implemented an ordinary wild bootstrap, in which each county-quarter observation is used as its own "cluster," to generate p-values and 95% confidence intervals, as suggested by MacKinnon and Webb ( ) and Roodman et al. ( ).
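Continuing the sketch above, a simplified version of the wild bootstrap for one ATT coefficient is shown below; it uses Rademacher weights on the unrestricted residuals with each county-quarter observation as its own "cluster," and it abstracts from refinements (such as imposing the null hypothesis when generating bootstrap samples) that a full implementation would include.

```python
# Simplified ordinary wild bootstrap p-value for one ATT coefficient,
# continuing from `df`, `formula`, and `fit` above (illustrative only).
import numpy as np
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
coef = "att_5_5"                       # example: Indiana, first treated quarter
beta_hat = fit.params[coef]
t_obs = fit.tvalues[coef]              # observed t-statistic (clustered SE)
fitted, resid = fit.fittedvalues, fit.resid

t_boot = []
for _ in range(999):
    v = rng.choice([-1.0, 1.0], size=len(df))        # Rademacher draw per observation
    df["y_star"] = fitted + v * resid                # wild-bootstrap outcome
    refit = smf.wls(formula.replace("y ~", "y_star ~", 1), data=df,
                    weights=df["enroll"]).fit()
    t_boot.append((refit.params[coef] - beta_hat) / refit.bse[coef])
p_value = np.mean(np.abs(np.array(t_boot)) >= np.abs(t_obs))
```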
RESULTS

5.1 Summary statistics

In Table , we present summary statistics for the three states that transitioned to a dental MCO, measured prior to their transition, and for our set of control states, measured prior to the first observed MCO transition in our sample. For overall utilization (share of beneficiaries with a dental claim and dental visits per 10,000 beneficiaries) and across the various service categories (diagnostic, preventive, restorative, prophylaxis and fluoridation), pre-reform utilization in Missouri was typically lower than in the other treatment states and the control states. Conversely, prior to its transition to dental managed care, Nebraska had higher dental utilization levels than the other treatment and control states. The same ordering holds when one examines the various dental procedure categories (diagnostic, preventive, restorative, prophylaxis and fluoridation). The age and gender distribution was also very similar across the treatment and control states. In 2016, median household income was lower in the treatment states than in the control states. The number of dentists per capita was lowest in Missouri (37.5) and highest in Nebraska (67.7). In the control states, the average county-level number of dentists per capita was 57.2.

5.2 Main results

Table reports the cohort- and time-specific ATTs for Indiana, Missouri and Nebraska, in addition to the cohort-specific treatment effects. Corresponding coefficient plots with 95% confidence intervals are shown in Figures and for the share of beneficiaries with a dental claim and the number of dental visits per 10,000 beneficiaries, respectively.

In the first two quarters after its transition to dental managed care, Indiana had a large decline in dental care utilization. The share of beneficiaries with a dental claim declined by 10.5%–12% points (p < 0.01), or by about 47%–54%, in the first 6 months of the managed care transition in Indiana. Utilization rebounded after the 6-month mark but remained below pre-reform levels. Specifically, in the fifth and sixth quarters following the dental managed care implementation in Indiana, the share of beneficiaries with a dental claim fell 2.2%–2.8% points (p < 0.05), or by 10%–13.5%, relative to the pre-reform period. This general pattern is also present in the number of dental visits per 10,000 beneficiaries. Overall, in Indiana, the share of beneficiaries with a dental claim declined by 4.2% points (p < 0.01), or by about 18%, and the number of visits per 10,000 beneficiaries declined by 594 visits (p < 0.01), or by about 20.6%, relative to the pre-reform period.

In the Missouri counties that transitioned to dental managed care in May 2017, the share of beneficiaries with a dental claim fell by 2.2%–3.5% points (12.1%–18.8%) in the first four quarters following the dental managed care transition relative to the pre-reform level. In the following three quarters, relative to the pre-reform baseline, there was no statistically significant change in dental care utilization. There was a statistically significant decline of 353–529 visits per 10,000 beneficiaries in the first four quarters in Missouri following its transition to dental managed care, but visits rebounded 1 year after the transition. Across the Missouri counties that transitioned to dental managed care, the share of beneficiaries with a dental claim declined by 1.5% points, a statistically insignificant decline.
The number of dental visits per 10,000 beneficiaries declined by 255 visits (p < 0.10), or by 10.5%, relative to the pre-reform period.

In Nebraska, utilization fell more modestly than in Indiana and Missouri, but the declines were statistically significant. In the first five quarters following its transition to dental managed care, relative to its baseline pre-reform average, the share of beneficiaries with a dental claim fell by 0.8%–2.4% points (3.1%–8.8%). Visits per 10,000 beneficiaries fell by 177–388 visits (5%–11%) in the first five quarters following the reform. Overall, in Nebraska, the share of beneficiaries with a dental claim declined by 1.5% points (p < 0.01), or by 5.9%, relative to the state's pre-reform period, and the number of visits per 10,000 beneficiaries declined by 267 visits (p < 0.01), or by 7.6%.

Table reports the aggregated overall treatment effect ($\hat{\bar{\tau}}$) as derived from Equation (5). Pooling the three states together, the share of beneficiaries with a dental claim declined by 2.6% points (p < 0.01) and visits per 10,000 beneficiaries fell by 394 (p < 0.01).

To assure our results were not driven by a violation of the common trends assumption, we re-estimated the model with cohort-specific linear time trends (Table ). For all dependent variables, the joint tests do not reject the null hypothesis of common trends. Given that the joint test does not reject the null hypothesis for any dependent variable, we did not include a cohort-specific linear time trend in any final specification. There is weak evidence of a violation of the common trends assumption in Missouri for visits per 10,000 beneficiaries, but the coefficient on the linear time trend in Missouri is only marginally significant (p < 0.10).

5.3 Robustness checks

To examine the robustness of the cohort- and quarter-specific ATTs from the Wooldridge ( ) estimator, we also estimated a specification using the Callaway and Sant'Anna ( ) difference-in-differences estimator with doubly robust estimation, as specified in Callaway and Sant'Anna ( ) and Sant'Anna and Zhao ( ). The cohort- and quarter-specific ATTs for Indiana, Missouri and Nebraska are presented in Table , and the aggregate ATT across all three treatment states and post-periods is presented in Table . Overall, our results when we apply the Callaway and Sant'Anna ( ) estimator are very similar to the main specification and imply qualitatively similar conclusions. The cohort- and quarter-specific ATTs for Indiana and Missouri are identical in sign and very close in magnitude to the main specification using the Wooldridge ( ) estimator. For Nebraska, the cohort- and quarter-specific ATTs are moderately larger in magnitude compared to the Wooldridge ( ) estimator. For the aggregated overall treatment effect, the traditional TWFE estimator (Table ) and the Callaway and Sant'Anna estimator (Table ) imply larger declines in the share of beneficiaries with a dental claim and visits per 10,000 beneficiaries than the Wooldridge estimator (Table ), with all three approaches yielding statistically significant effects. In conclusion, our results appear to be robust when applying the Callaway and Sant'Anna ( ) estimator and the traditional TWFE estimator. This may be due to the fact that there are few treated cohorts in our estimation sample and few cases in which later-treated units are compared to early-treated units, as shown by the Goodman-Bacon ( ) decomposition (Figure and Table ).
5.4 Dental procedure categories

To better understand how utilization declined in Indiana, Missouri and Nebraska following the transition to dental managed care, we also examined the change in utilization by dental procedure category. The aggregated ATTs over the entire post-period imply that the share of beneficiaries with a diagnostic claim (Table ) significantly declined by 4% and 1% points in Indiana and Nebraska, respectively. In all three states, there were statistically significant declines in the share of beneficiaries with a diagnostic claim in the first few quarters. Changes in preventive dental care utilization followed a similar pattern in the three states that fully transitioned to dental managed care in 2017 (Table ). Overall, the share of beneficiaries with a preventive dental care claim declined by 3.8% points (19.7%) in Indiana (p < 0.01), 1.7% points (11.9%) in Missouri (p < 0.10), and 1.1% points (5.0%) in Nebraska (p < 0.01). These results are confirmed by the changes in the share of beneficiaries receiving prophylaxis and fluoridation dental care services (Tables and ). Interestingly, while the effects become smaller over time, there were still quarters one full year after the transition in which utilization was below pre-reform levels in all states.

Finally, the share of members receiving restorative dental care services experienced smaller declines than overall, diagnostic, and preventive dental services (Table ). In fact, only the aggregated ATT with respect to restorative dental utilization for Indiana was marginally statistically significant, at the 10% level. Most of the decline in restorative dental care services in Indiana occurred in the first 6 months following the state's transition to dental managed care. In the subsequent quarters, restorative dental care utilization in Indiana returned to near pre-reform levels. Given that restorative dental care services are medically necessary when a child has tooth pain or cavities, it should not be surprising that the dental managed care transition had less of an impact on these services than on diagnostic or preventive dental care services, which may be easier for a managed care entity to limit or ration.
Summary statistics In Table , we present summary statistics for the three states that transition to a dental MCO prior to their transition and our set of control states prior to the first observed MCO transition in our sample. For overall utilization (share of beneficiaries with a dental claim and dental visits per 10,000 beneficiaries) and across the various service categories (diagnostic, preventive, restorative, prophylaxis and fluoridation), pre‐reform utilization in Missouri was typically lower than in the other treatment states and the control states. Conversely, Nebraska prior to its transition to dental managed care had higher dental utilization levels than the other treatment and control states. The order of this pattern holds when one examines the various dental procedure categories (diagnostic, preventive, restorative, prophylaxis and fluoridation). The age and gender distribution was also very similar across the treatment and control states. In 2016, median household income was lower in the treatment states than in the control states. The number of dentists per capita was also lowest in Missouri (37.5) and highest in Nebraska (67.7). In the control states, the average county‐level number of dentists per capita was 57.2.
Main results Table reports the cohort and time specific ATTs for Indiana, Missouri and Nebraska in addition to the cohort specific treatment effects. Corresponding coefficient plots with 95% confidence intervals are shown in Figures and for the share of beneficiaries with a dental claim and the number of dental visits per 10,000 beneficiaries outcomes, respectively. In the first two quarters after its transition to dental managed care, Indiana had a large decline in dental care utilization. The share of beneficiaries with a dental claim declined by 10.5%–12% points ( p < 0.01) or by about 47%–54% in the first 6 months of the managed care transition in Indiana. Utilization rebounded after the 6 ‐month mark, but remained below pre‐reform levels. Specifically, in the fifth and sixth quarters following the dental managed care implementation in Indiana, the share of beneficiaries with a dental claim fell 2.2%–2.8% points ( p < 0.05) or by 10%–13.5% relative to the pre‐reform period. This general pattern is also present in the number of dental visits per 10,000 beneficiaries. Overall, in Indiana, the share of beneficiaries with a dental claim declined by 4.2% points ( p < 0.01) or by about 18%, and the number of visits per 10,000 beneficiaries declined by 594 visits ( p < 0.01) or by about 20.6% relative to the pre‐reform period. In the Missouri counties that transitioned to dental managed care in May 2017, the share of beneficiaries with a dental claim fell by 2.2%–3.5% points (12.1%–18.8%) in the first four quarters following the dental managed care transition relative to the pre‐reform level. In the following three quarters, relative to the pre‐reform baseline, there was no statistically significant change in dental care utilization. There was a statistically significant decline of 353–529 visits per 10,000 beneficiaries in the first four quarters in the first four quarters in Missouri following its transition to dental manage care, but visits rebounded 1 year after the transition. Across the Missouri counties that transitioned to dental managed care, the share of beneficiaries with a dental claim declined by 1.5% points, a statistically insignificant decline. The number of dental visits per 10,000 beneficiaries declined by 255 visits ( p < 0.10) or by 10.5% relative to the pre‐reform period. In Nebraska, utilization fell more modestly than in Indiana and Missouri, but the declines were statistically significant. In the first five quarters following its transition to dental managed care, relative to its baseline pre‐reform average, the share of beneficiaries with a dental claim fell by 0.8%–2.4% points (3.1%–8.8%). Visits per 10,000 beneficiaries fell by 177–388 visits (5%–11%) in the first five quarters following the reform. Overall, in Nebraska, the share of beneficiaries with a dental claim declined by 1.5% points ( p < 0.01) or by 5.9% relative to the state's pre‐reform period and the number of visits per 10,000 beneficiaries declined by 267 visits ( p < 0.01) or by 7.6%. Table reports the aggregated overall treatment effect ( ) as derived from Equation ( ). Pooling the three states together, the share of beneficiaries with a dental claim declined by 2.6% points ( p < 0.01) and visits per 10,000 beneficiaries fell by 394 ( p < 0.01). To assure our results were not driven by a violation in the common trends assumption, we re‐estimated the model with cohort specific linear time trends (Table ). For all dependent variables, the joint tests do not reject the null hypothesis of common trends. 
Given that the joint test does not reject the null hypothesis for any dependent variable, we did not include a cohort‐specific linear time trend in any final specification. There is weak evidence of a violation of the common trends assumption in Missouri for visits per 10,000 beneficiaries, but the coefficient on the linear time trend in Missouri is only marginally significant ( p < 0.10).
Robustness checks
To examine the robustness of the cohort- and quarter-specific ATTs from the Wooldridge ( ) estimator, we also estimated a specification using the Callaway and Sant'Anna ( ) difference-in-differences estimator with doubly robust estimation as specified in Callaway and Sant'Anna ( ) and Sant'Anna and Zhao ( ). The cohort- and quarter-specific ATTs for Indiana, Missouri and Nebraska are presented in Table , and the aggregate ATT across all three treatment states and post-periods is presented in Table . Overall, our results when we apply the Callaway and Sant'Anna ( ) estimator are very similar to the main specification and imply qualitatively similar conclusions. The cohort- and quarter-specific ATTs for Indiana and Missouri are identical in sign and very close in magnitude to the main specification using the Wooldridge ( ) estimator. For Nebraska, the cohort- and quarter-specific ATTs are moderately larger in magnitude compared with the Wooldridge ( ) estimator. For the aggregated overall treatment effect, the traditional TWFE (Table ) and Callaway and Sant'Anna (Table ) estimators imply larger declines in the share of beneficiaries with a dental claim and visits per 10,000 beneficiaries than the Wooldridge estimator (Table ), with all three approaches yielding statistically significant effects. In conclusion, our results appear robust to the choice of estimator, whether applying the Callaway and Sant'Anna ( ) estimator or the traditional TWFE estimator. This may be because there are few treated cohorts in our estimation sample and few cases in which later-treated units are compared with early-treated units, as shown by the Goodman-Bacon ( ) decomposition (Figure and Table ).
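To make the staggered difference-in-differences logic concrete, the sketch below computes unconditional cohort-by-quarter ATTs and one simple aggregate from a hypothetical state-quarter panel. Column names, the never-treated comparison group, and the cohort-size weighting are illustrative assumptions, not the paper's actual variables; the estimators used in this study (Wooldridge's extended TWFE and the doubly robust Callaway-Sant'Anna estimator) add regression adjustment and proper inference that this sketch omits.

```python
import pandas as pd

# Hypothetical long-format panel, one row per state-quarter. Assumed columns:
#   state   - state identifier
#   quarter - integer calendar-quarter index
#   cohort  - first treated quarter (NaN for never-treated comparison states)
#   y       - outcome, e.g., share of beneficiaries with a dental claim

def cohort_quarter_atts(df: pd.DataFrame) -> pd.DataFrame:
    """Unconditional ATT(g, t): each treated cohort's mean outcome change
    since its last pre-treatment quarter, minus the same change among
    never-treated states over the same window."""
    never = df[df["cohort"].isna()]
    rows = []
    for g, treated in df.dropna(subset=["cohort"]).groupby("cohort"):
        base = g - 1  # last quarter before cohort g's transition (assumed observed)
        for t in sorted(q for q in treated["quarter"].unique() if q >= g):
            d_treated = (treated.loc[treated["quarter"] == t, "y"].mean()
                         - treated.loc[treated["quarter"] == base, "y"].mean())
            d_never = (never.loc[never["quarter"] == t, "y"].mean()
                       - never.loc[never["quarter"] == base, "y"].mean())
            rows.append({"cohort": g, "quarter": t,
                         "att": d_treated - d_never,
                         "n_treated": treated["state"].nunique()})
    return pd.DataFrame(rows)

def overall_att(atts: pd.DataFrame) -> float:
    """One simple aggregation: average the cohort-quarter ATTs, weighting
    each cell by the number of treated states in its cohort."""
    return float((atts["att"] * atts["n_treated"]).sum() / atts["n_treated"].sum())
```

In practice one would rely on the published implementations of the cited estimators rather than hand-rolled group means, which ignore covariates and clustered inference.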
Dental procedure categories
To better understand how utilization declined in Indiana, Missouri and Nebraska following the transition to dental managed care, we also examined the change in utilization by dental procedure category. The aggregated ATTs over the entire post-period imply that the share of beneficiaries with a diagnostic claim (Table ) significantly declined by 4 and 1 percentage points in Indiana and Nebraska, respectively. In all three states, there were statistically significant declines in the share of beneficiaries with a diagnostic claim in the first few quarters. Changes in preventive dental care utilization also followed a similar pattern in the three states that fully transitioned to dental managed care in 2017 (Table ). Overall, the share with a preventive dental care claim declined by 3.8 percentage points (19.7%) in Indiana ( p < 0.01), 1.7 percentage points (11.9%) in Missouri ( p < 0.10), and 1.1 percentage points (5.0%) in Nebraska ( p < 0.01). These results are confirmed by the changes in the share of beneficiaries that received prophylaxis and fluoridation dental care services (Tables and ). Interestingly, while the effects become smaller over time, there were still quarters one full year after the transition in which utilization was below pre-reform levels in all states. Finally, the share of members receiving restorative dental care services experienced smaller declines than overall, diagnostic, and preventive dental services (Table ). In fact, only the aggregated ATT with respect to restorative dental care utilization for Indiana was marginally statistically significant at the 10% level. Most of the decline in restorative dental care services in Indiana occurred in the first 6 months following the state's transition to dental managed care. In the subsequent quarters, restorative dental care utilization in Indiana returned to near pre-reform levels. Given that restorative dental care services are medically necessary when a child has tooth pain or cavities, it should not be surprising that the dental managed care transition had less of an impact on these services than on diagnostic or preventive dental care services, which may be easier for a managed care entity to limit or ration.
CONCLUSION
Between 2016 and the end of 2018, Indiana, Missouri and Nebraska transitioned their Medicaid pediatric dental benefits from an FFS model to a private managed care system. This study examines dental service utilization patterns associated with this transition. Utilizing an extended TWFE approach as proposed by Wooldridge ( ), we estimated ATTs for each state over time and calculated aggregate treatment effects across the three treatment states. The main analysis examined the proportion of pediatric Medicaid beneficiaries that had a dental claim and the number of dental visits per 10,000 beneficiaries. The theoretically predicted effect of the transition to managed care is ambiguous. Managed care organizations could enact strategies to decrease utilization in order to ensure that dental expenditures by the MCO are below the payments received from the state. Conversely, the MCO may also implement strategies that increase utilization, especially if MCOs promote the use of preventive services that could reduce later need for more costly restorative services. Our empirical results find that, relative to states that continued to provide pediatric dental services exclusively on an FFS basis, there is evidence of a decline in dental services utilization following the transition to managed care. Pediatric dental care use among publicly insured children in Indiana declined by about 18% in the 2 years after the state's transition to dental managed care, and by about 12%–19% in the counties that transitioned to managed care in Missouri in the first year after the state's transition from FFS. The decline in utilization was more modest in Nebraska, but utilization still fell by about 6%. The pattern in Indiana is particularly striking, with large declines in the first few months and then more modest declines in utilization 6 months after the transition to managed care. A qualitative interview with a local official in Indiana suggested that the transition was not smooth and that the pattern seen in 2017 was due to complications in the implementation process. For example, in 2017, Indiana changed its provider credentialing systems, and many dentists were not able to properly submit claims to the dental MCO during the first few months following the transition to managed care. While this may explain the decline in utilization in the first 6 months after the transition, the decline in pediatric Medicaid dental care utilization in Indiana persisted through much of 2018. It is possible that some dental providers stopped treating Medicaid beneficiaries due to difficulty in dealing with the administrative aspects of dental managed care, but this requires further research beyond the scope of this paper. It is difficult to tease out whether the persistent decrease in dental care utilization in Indiana is due to problems with claims processing or information technology systems, or to MCOs having an incentive to limit spending. It may be a combination of both. All states saw declines in utilization of dental services, with the largest declines in Indiana. The magnitudes of the declines in Missouri and Nebraska were similar (even though the effect for Missouri was not always statistically significant). While our empirical analysis cannot test the exact reason for this pattern, one hypothesis is that states with more generous Medicaid FFS reimbursement have better access to care, and beneficiaries in these states might be more affected by a transition to managed care (Layton et al., ). Our results suggest that this may be true.
In 2016, Medicaid FFS rates in Indiana were at 69.2% of commercial rates; in Missouri, Medicaid FFS rates were at 50% of commercial rates; and in Nebraska, FFS Medicaid rates were at 59% of commercial rates for pediatric dental services (Gupta et al., ). This result is consistent with outcomes being worse in states that had more generous FFS rates. However, given that most of the decline in dental care utilization in Indiana occurred in the first two quarters of the dental managed care transition, possibly due to problems with the implementation process in the state, it is difficult to make any strong connection between pre-existing FFS provider rates and subsequent outcomes under managed care. In conclusion, as states move to transition more Medicaid benefits, such as long-term care, behavioral health, and dental services, to managed care, additional research is warranted to ensure that beneficiaries can utilize services and that care quality is not diminished. Our study shows that the recent transitions from FFS to managed care in the carved-out service of dental care tend to result in lower utilization rates. Furthermore, the experience of Indiana shows that without properly implemented administrative and information technology systems, the transition to dental managed care can result in significant service delivery disruption for beneficiaries. While this reduction in utilization can be seen as a negative, our study is limited by not having measures of quality. If dental MCOs are better able to coordinate contact between beneficiaries and dental providers, this could result in better dental outcomes that require fewer dental visits, though this is unlikely. If this were the case, we would see fewer overall visits and fewer visits for restorative care, but no change in preventive visits. Instead, we found that dental care utilization declined across diagnostic and preventive service categories. Furthermore, we did not have payment information. This limited our ability to compare expenditures between public and private provision of pediatric Medicaid dental services. Our results highlight that MCOs may reduce utilization of dental services, but additional research is needed to understand the quality and cost implications of this decreased utilization.
The authors declare no conflict of interest.
Connecting the dots: A narrative review of the relationship between heart failure and cognitive impairment

Heart failure (HF) is a prevalent condition, affecting at least 26 million people worldwide and imposing a significant social and economic burden on public health. Based on a US investigation, healthcare costs for HF vary from $441 to $1585 per-patient-per-month (PPPM), depending on the patient's condition, whether stable or experiencing a worsening HF event. This burden is expected to increase as the global population ages, with HF prevalence predicted to double within 40 years. Similarly, mild cognitive impairment (CI), defined as a statistical construct denoting performance on cognitive tests consistently below age- and education-specific norms, and its severe form, dementia, are also associated with increasing social and economic burdens. Current estimates suggest that over 55 million people worldwide are affected by CI, though many mild cases may go unrecognized. In a German health economics study, the net annual costs of dementia varied by disease stage, ranging from €15 474 up to €41 808. Similar to HF, the prevalence of CI is higher in Western countries, primarily due to the older average age of the population. However, it is important to recognize that the lower prevalence of CI in middle- and low-income countries may also be attributed to underdiagnosis, in addition to differences in population age. With the population aging in these regions as well, the number of individuals affected by dementia is projected to nearly double every 20 years worldwide, with a significant increase expected in developing countries. CI and HF often co-occur in the aging population, and they probably interact with each other, even if the precise pathophysiological processes underlying this association are not yet fully understood. There is recognition that HF may be considered an independent predictor of CI, though the universality and unequivocal nature of this prediction remain subjects of ongoing research. This is a non-casual association: a significant portion of dementia cases can be attributed to cardiovascular risk factors, which could potentially be prevented through cardiovascular risk modification. Furthermore, HF can lead to varying degrees of CI, influenced by genetic factors (e.g., apolipoprotein E (APOE)) and environmental elements. Hence, given the presence of several of the nine 'aspects of non-casual association' discussed by Hill in 1965 (strength of association, consistency, specificity, temporality, biological gradient, plausibility, coherence, experiment and analogy), we can hypothesize a multifaceted relationship between HF and CI. In this narrative review, we first describe the prevalence of, and analyse the risk of, CI in patients with HF. Second, we discuss the possible pathophysiological mechanisms by which HF constitutes a risk factor for CI. Third, we summarize the literature evidence on the effects of HF therapy on cognitive dysfunction. Finally, we discuss clinical implications and future treatment approaches in terms of therapeutic targets.
According to the existing literature, CI is highly prevalent among individuals with HF, with reported rates spanning from 3% to 80%, depending on the assessment methods for cognition ( Table ). Studies drawing from extensive cohorts, including data from International Classification of Diseases (ICD) records, consistently report a prevalence of almost 15%, although individual studies report divergent values. For instance, Lafo et al. observed a 36% prevalence of CI among a highly co-morbid population of veterans hospitalized primarily for HF; this population also had notably high rates of hospital readmission (30-day readmission rate: 15%, 1-year readmission rate: 59%) and mortality (30-day mortality 6%, 1-year mortality 41%). Biagi et al. reported a 35% prevalence in another extensively co-morbid cohort of 1444 patients admitted to an Internal Medicine Department with a diagnosis of chronic HF. Conversely, three studies reported a lower dementia prevalence (3.2%, 6.9% and 6.9%, respectively) in relatively less co-morbid populations. , , Unfortunately, for the majority of these cohort studies, the clinical subtypes of dementia/CI (vascular dementia vs. Alzheimer's disease (AD)) are not available. The only published prospective data on mild CI (MCI) in HF with full data available come from the COGNITION.MATTERS-HF study, which followed 148 patients for up to 6 years and collected data on cognitive function, brain imaging and inflammatory biomarkers. The study's principal findings revealed that HF patients exhibited selective deficits in the attention (41%) and verbal memory (28%) domains, with medial temporal lobe atrophy (potentially indicating underlying AD-related pathology) identified as a probable structural correlate of cognitive dysfunction. Studies investigating the potential risk of CI development in HF patients remain limited and yield conflicting results ( Table ). Most studies suggest a significantly higher risk compared to the general population (with HR ranging from 1.3 to 1.7), while a few show non-significant outcomes, possibly due to the relatively low number of HF patients in those studies. , , In contrast, a Swedish nationwide analysis reported a potentially protective role of HF against dementia risk in patients with atrial fibrillation (AF), hypothesizing that effective drug treatment in HF patients and higher mortality rates in the AF and HF subgroup might explain this unexpected result. A comprehensive meta-analysis by Cannon et al., involving 37 studies, demonstrated a significantly elevated risk of CI in HF patients (prevalence 43%) compared to matched controls without HF. It is worth noting that patients affected by CI are often excluded from clinical studies, which may lead to underestimation of the actual prevalence. The included studies, however, exhibit significant heterogeneity, with dementia prevalence ranging from 10% to 79%. Another meta-analysis, by Li et al., which encompassed 119 studies, reported a pooled proportion of 41% in the 95 studies that examined CI using standardized tests. Several studies have explored the timing of cognitive decline in relation to the onset of HF. Most of these studies suggest that cognitive decline tends to manifest after the onset of HF rather than before it. For example, Sterling et al. found that the prevalence of CI 1–18 months before HF onset was comparable to that in control subjects (14.9% [11.7–18.6%] vs. 13.4% [11.6%–15.4%], P < 0.43).
Similarly, Sun et al. reported a low prevalence of dementia in patients with newly diagnosed HF (4.2% in women and 2.5% in men). Regarding the impact of HF duration on cognitive decline, Hammond et al. reported a faster decline in Modified Mini-Mental State Examination scores over the 5 years following HF onset. Bressler et al. also found that the greatest six-year decline in cognitive test scores was significantly associated with an increased risk of developing HF. In the ARIC study, Witt et al. observed that participants with HF had a higher prevalence of dementia (RRR = 1.60 [95% CI 1.13, 2.25]) and MCI (RRR = 1.36 [95% CI 1.12, 1.64]) at visit 5 (2011–2013), with a decline in cognition between visit 4 (1996–1998) and visit 5 that was greater in those who developed HF after visit 4. Long-term data from Adelborg et al. indicated an increased risk of dementia in patients with HF over 1–35 years of follow-up, with risk ratios of 1.21 (95% CI 1.18–1.24) during the first 10 years, 1.19 (95% CI 1.11–1.28) during 11–20 years, and 1.38 (95% CI 1.07–1.79) during 21–35 years. Ren et al. further reported that 11.0% of patients developed new-onset dementia after a median follow-up of 4.1 years (IQR: 1.2–10.2) post-HF diagnosis, with a higher incidence in women (64%). Regarding the correlation between the severity of HF and the prevalence of CI, several studies have demonstrated a significant relationship. Lee et al. found that New York Heart Association (NYHA) functional class II or higher was independently associated with cognitive decline. Similarly, Brunén et al. confirmed that patients with NYHA class III–IV had a higher prevalence of CI. The WARCEF trial highlighted an independent association between left ventricular ejection fraction (LVEF) and decline in Mini-Mental State Examination (MMSE) scores. On the other hand, the ARIC cognition study found no significant difference in the incidence of CI between HF with reduced ejection fraction (HFrEF) and HF with preserved ejection fraction (HFpEF) patients. Additionally, in acute decompensated HF, no significant disparity was observed in mean Montreal Cognitive Assessment (MoCA) scores or in the proportion of patients with MoCA scores below 26 between HFpEF and HFrEF groups. The link between LVEF and cognitive function may be nonlinear, suggesting a potentially exponential association, with a stronger impact observed at lower LVEF levels than at higher ones. Lastly, the potential misdiagnosis of depressive symptoms as cognitive impairment in older HF patients must be considered, as this could affect the accuracy of CI prevalence estimates. Some studies have noted the difficulty of differentiating between CI and depression in these patients, highlighting the need for careful clinical evaluation.
Evaluation of the complex interaction between the heart and brain is necessary to address patient prognosis and well-being. The shared pathological pathways and risk factors, including AF, hypertension, obesity and type 2 diabetes mellitus (T2DM), provide vital clues to understanding the causal link between HF and CI. , Poor perfusion, microembolic events, ischaemic syndromes, cerebral inflammation, endothelial dysfunction with blood–brain barrier damage, and the presence of amyloid deposits may collectively underlie the intricate relationship between HF and CI ( Figure ).

Atrial fibrillation and cognitive impairment burden
The age-adjusted incidence rate of AF, reported as 1.33 per 1000 person-years, ranges from 0.13 per 1000 person-years in individuals aged under 55 years to 7.65 per 1000 person-years in those aged 85 years or older. Multiple observational studies, along with several comprehensive meta-analyses, have consistently highlighted that AF is closely linked to an elevated risk of CI and dementia. Notably, the extensive biracial population-based ARIC-NCS study, spanning two decades, found that individuals diagnosed with AF exhibited a substantially greater cognitive decline compared with their counterparts who did not develop AF (HR 1.23, 95% confidence interval 1.04–1.45). Importantly, even after rigorous adjustment for common confounding factors such as age, sex, education, apolipoprotein E, smoking, body mass index (BMI), arterial hypertension, T2DM, coronary heart disease and stroke, the association remains significant, especially among younger patients. This underscores the autonomous role of AF in accelerating cognitive dysfunction, prompting further inquiry into its potential causal contribution. The primary driver of AF-induced CI is cerebral infarction. However, other proposed pathways include:
Cerebral hypoperfusion: AF disrupts the heart's atrioventricular synchrony, leading to decreased cardiac output, stroke volume and blood pressure. Research in elderly populations with persistent AF has revealed a connection to reduced total cerebral blood flow and impaired whole-brain perfusion, as assessed by phase-contrast magnetic resonance imaging (MRI) of the brain.
Inflammation and systemic atherosclerotic vascular disease: inflammation is believed to promote hypercoagulability and thrombus formation, potentially increasing the risk of stroke and disrupting cerebrovascular regulation, which has links to AD and vascular dementia. Inflammation may also serve as a nonspecific marker of atherosclerotic vascular disease. Recent studies have further explored the connection between inflammation, atherosclerosis and AF, highlighting the potential impact on cognitive decline.
Microhaemorrhage: the relationship between the burden of cerebral microbleeds, often attributed to oral anticoagulant therapy, particularly in lobar locations (overlapping with cerebral amyloid angiopathy, CAA), and cognitive function is a subject of ongoing debate. ,
Recent evidence suggests that rhythm-control strategies, particularly catheter ablation, are associated with a reduced risk of CI and dementia in patients with AF. A meta-analysis by Guo et al. found that rhythm-control therapy was significantly associated with a lower risk of future dementia (HR 0.74, 95% CI 0.62–0.89) compared to rate-control strategies.
Specifically, AF ablation was linked to significantly lower risks of overall dementia (HR 0.62, 95% CI 0.56–0.68), AD (HR 0.78, 95% CI 0.66–0.92) and vascular dementia (HR 0.58, 95% CI 0.42–0.80). Notably, an American study of 38 176 patients demonstrated that catheter ablation was associated with a 41% lower risk of dementia compared to antiarrhythmic drugs (1.9% vs. 3.3%; HR 0.59, 95% CI 0.52–0.67, P < 0.0001). Similarly, a nationwide cohort study in Korea involving 11 726 AF patients reported a reduced risk of dementia with catheter ablation compared to antiarrhythmic or rate-control drugs alone (HR 0.73, 95% CI 0.58–0.93).

Hypertension and cognitive impairment burden
Hypertension affects approximately 1.4 billion adults, 31% of the adult population, worldwide. Recent findings from a comprehensive meta-analysis conducted by Qin and colleagues revealed a noteworthy association between hypertension and MCI, with an overall pooled prevalence of 30%. Interestingly, MCI prevalence in hypertensive patients demonstrated regional disparities, with rates of 26% in Asia, 40% in Europe and 17% in the Americas. After age stratification, the prevalence of MCI was 44% (95% CI 1–86) in hypertensive patients under 60 years old, compared with 28% (95% CI 24–32) in those aged 60 and above. It is important to note that the younger age group's estimate is based on only two studies, leading to a wide confidence interval. Mehra et al. reported a notably high prevalence of 66% for MCI in their study on the impact of hypertension on cognitive functions. In this study, 45.7% of participants had metabolic syndrome, a rate significantly higher than in the general population (9.2%–41%). Those with metabolic syndrome exhibited poorer cognitive performance across all domains of the MoCA, even after adjusting for age, education, depression severity and illness duration. Lower education levels, lower income and higher age were significantly associated with lower MoCA scores. Additionally, the use of the MoCA, which may lack precision in individuals with lower education levels, could contribute to the high prevalence reported. Hypertension might play a pivotal role in the pathophysiology of CI, with a multifaceted impact on vascular structure. The potential connection between hypertension, HF and CI is driven by a complex interplay of mechanical, cellular and molecular factors that trigger vascular smooth muscle cell remodelling. Hypertension fosters the development and accumulation of atherosclerotic plaques in key arteries, including the carotid, vertebral and intracranial cerebral arteries. It is closely associated with, or often preceded by, arterial stiffening, attributed to various factors such as collagen deposition and elastin fragmentation. This stiffening elevates pulse pressure and enhances mechanical stress transmission through the cerebrovascular system, leading first to adaptive changes in small vessels aimed at protecting the downstream microcirculation, and then to fibrotic thickening of small penetrating arteries, a common feature in both HF and dementia. Furthermore, microvascular rarefaction, which involves a reduction in vascular density encompassing both capillaries and arterioles, is observed in both human subjects and animal models of hypertension. , It is believed to result from the increased pressure transmitted to the microvascular bed. Given the limited presence of vessels in the white matter, this phenomenon may contribute to the development of white matter lesions.
Cerebral microhaemorrhages are closely associated with hypertension , and are linked to compromised cognitive function. In several longitudinal studies, the progression of white matter hyperintensities and small vessel disease correlates with the duration of hypertension and with poorly effective blood pressure control. Another characteristic of small vessel disease involves the enlargement of the perivascular space surrounding intracerebral arteries and veins.

Obesity, type 2 diabetes mellitus and cognitive impairment burden
In the US National Health and Nutrition Examination Survey, the age-adjusted prevalence of obesity was recorded at 42.4% in 2017–2018, with variations across age groups: 40.0% for individuals aged 20–39, 44.8% for those aged 40–59, and 42.8% for adults aged 60 and above. Additionally, a robust, dose-dependent relationship has been established connecting higher BMI levels with an increased risk of HF, particularly in patients with HFpEF. , A multitude of deleterious pathological features, such as insulin resistance, gut dysbiosis, oxidative stress, inflammasome activation and systemic inflammation, are associated with obesity and T2DM. , , Each of these pathological features may contribute to neuroinflammation and brain injury. Chronic systemic inflammation is a hallmark of obesity and can be instigated by adipose tissue expansion (adipocyte hypertrophy and proliferation). Adipose tissue expansion promotes a hypoxic environment in which adipocytes undergo apoptosis, causing further inflammation. Adipokines are of particular interest due to their ability to modulate insulin resistance, dysregulate the gut–brain axis, and increase systemic inflammation, which may contribute to the development of neuroinflammation and dementia pathology. Impaired insulin signalling may be one of the early drivers of amyloid deposition in AD, showing how this morbidity can link the pathology of T2DM and AD. Research has shown that insulin-degrading enzyme can break down amyloid beta (Aβ), indicating that insulin resistance could potentially contribute to changes in Aβ metabolism and increased amyloid pathology in AD. Insulin resistance in murine models has been shown to promote amyloid precursor protein (APP) phosphorylation and to increase the formation of amyloid plaques in the brain ; moreover, insulin resistance has been shown to contribute to neuroinflammation and neurodegeneration. ,

Physical inactivity and cognitive impairment burden
It is crucial to underscore the pivotal role of physical inactivity in patients with HF, particularly in the context of cognitive function. Studies have compellingly demonstrated that physical inactivity is significantly associated with CI in HF patients, in terms of executive function, attention, processing speed and cognitive screening scores. This evidence emphasizes the critical need for interventions aimed at reducing sedentary behaviour and increasing physical activity levels. Engaging in regular physical activity not only improves cognitive function but also holds potential for reducing depression and enhancing overall well-being. Higher levels of physical activity, measured in terms of step count and time spent in moderate-to-vigorous activity, have consistently shown positive correlations with improved cognitive function, while lower physical activity levels have been associated with cognitive dysfunction.
Hyperlipidaemia and cognitive impairment burden
Atherosclerosis, the hallmark of hyperlipidaemia, is a systemic process that affects large and small blood vessels throughout the body, including the brain. Furthermore, the brain is highly dependent on cholesterol for membrane structure and function. Dysregulation of cholesterol metabolism has been linked to CI, and it has been suggested that cholesterol-lowering therapies could reduce the risk of cognitive dysfunction. ,
The connection between HF and CI extends beyond shared risk factors; HF itself can potentially contribute to cognitive dysfunction. Complex interactions occur at multiple levels between AD hallmarks, such as extracellular senile plaques rich in Aβ peptide and intraneuronal neurofibrillary tangles (NFTs) composed of hyperphosphorylated microtubule-binding protein tau (p-tau), and key cardiovascular (CV) disease features, including neuroinflammation, cerebrovascular dysfunction, blood–brain barrier (BBB) injury and cerebral amyloid angiopathy (CAA). , Recent evidence suggests that AD and cerebrovascular dementia are part of a disease continuum, in which intersecting pathways can favour either vascular or parenchymal amyloid deposition. The primary amyloid peptide in parenchymal lesions of AD is Aβ1–42, while Aβ1–40 is more prevalent in peripheral atherosclerotic lesions. Factors altering the Aβ1–40/Aβ1–42 ratio, like APOE, favour amyloid deposits in the form of cerebral amyloid angiopathy rather than parenchymal plaques. This preference for Aβ species in different tissues may result from substantial Aβ1–40 production by platelets, plaque-invading macrophages, endothelial cells and vascular smooth muscle cells. Additionally, APOE isoforms have varying effects on Aβ production, aggregation and clearance. Specifically, the APOEe4 allelic variant represents the most potent genetic risk factor for sporadic AD and identifies a distinct clinicopathological entity, while APOEe2 is associated with a lower AD risk. Nevertheless, evidence suggests that APOEe4 is linked to compromised cerebrovascular integrity and function, contributing to blood–brain barrier dysfunction and serving as a risk factor for CI due to cerebrovascular dementia, both in the presence and absence of AD pathology.

Heart failure pharmacotherapy and cognitive impairment
The effect of pharmacological therapy for HF on cognitive dysfunction has been a topic of investigation, with specific focus on certain medications. While anticoagulation has an established role in reducing incident dementia in AF patients, its impact on HF patients without known AF remains uncertain. Experimental studies suggest that anticoagulant agents, such as heparin and enoxaparin, may inhibit the neurotoxic effects of amyloid beta through their glycosaminoglycan structure, affecting APP function and BACE1 activity. The evidence regarding their effect on cognitive test performance and covert infarcts in patients with stable coronary artery disease (CAD) or peripheral artery disease treated with rivaroxaban and aspirin is inconclusive. Angiotensin receptor–neprilysin inhibitors (ARNIs) are theorized to potentially impact cognitive function by affecting Aβ peptides in the central nervous system. Neprilysin inhibition could reduce their breakdown, while increased bradykinin levels may damage the BBB and contribute to amyloid plaque deposition. The real-world analysis of the PARADIGM-HF trial did not demonstrate these effects. Similarly, inhibition of angiotensin-converting enzyme (ACE) may influence the availability of amyloid beta peptides. ACE inhibitors (ACEIs) have been found to increase levels of Aβ1–42, while the results for Aβ1–40 levels have been inconsistent, with either an increase or no change observed. While pathophysiological hypotheses and in-vitro studies suggest potential mechanisms, clinical data have also shed light on this matter.
For instance, in a study involving 1220 patients admitted with HF, abbreviated mental test scores improved from admission to discharge in 30% of patients after the initiation of ACEIs, compared to 22% of HF patients not receiving these drugs (odds ratio, 1.6 [95% CI, 1.2–2.1]). The role of sodium-glucose cotransporter-2 inhibitors (SGLT2Is) is also being explored; in a prospective study of 162 frail patients with diabetes, HFpEF and baseline MoCA scores <26, empagliflozin monotherapy correlated with improved MoCA scores 1 month after admission, whereas treatment with insulin or metformin did not. It is important to note that the authors did not control for treatment duration or serum glucose levels. These findings suggest a potential cognitive benefit associated with SGLT2Is in HF patients, and further research is needed to elucidate the mechanisms behind these effects and to explore the long-term cognitive implications of this therapy. Recent studies have shown promising outcomes for glucagon-like peptide-1 receptor agonists (GLP-1 RAs) in obese patients with HFpEF. Four meta-analyses have been conducted to assess the influence of GLP-1 RAs on cognitive function in individuals with T2DM. , , , However, none of these analyses specifically investigated the cognitive effects in patients with HF, highlighting the need for further research to better understand the potential effects of GLP-1 RAs on cognitive function in this population. Despite the potential significance of CI in HF, major pharmacological trials, including those involving ARNIs and SGLT2Is, have not extensively reported on this aspect. It may not be feasible to include dementia patients in these trials, but the inclusion of patients with MCI could provide valuable insights, considering the potential impact on adherence and treatment efficacy.
CI and brain health in HF patients have profound clinical implications, as evident in a meta-analysis of over 10 000 individuals that highlighted reduced treatment adherence, compromised self-care abilities and a decreased likelihood of seeking assistance among those with CI. While HF guidelines underscore the importance of recognizing cognitive impairment, especially in the context of frailty, communication challenges and end-of-life decisions, they currently lack specific recommendations for routine screening or diagnosis in clinical practice. Additionally, insights from the Registro de Insuficiencia Cardiaca (RICA) suggest that HF patients with severe CI face heightened mortality and morbidity risks, characterized by advanced age, increased co-morbidity burden, lower survival rates and a higher incidence of death or readmission at 1 year. Nevertheless, acknowledging this association is crucial, as it influences the risk/benefit ratio of various therapeutic interventions. The efficacy of pharmacotherapy and procedural interventions may be constrained if patients have limited life expectancy in which to derive benefits. For instance, if life expectancy is less than 1 year, the probability of benefit from interventions such as implantable cardioverter-defibrillators or transcatheter mitral/aortic valve procedures is low, and such interventions are hence not warranted. Beyond clinical implications, the economic impact of cognitive impairment in HF warrants attention, emphasizing the need for a comprehensive health-economic assessment to address the multifaceted challenges posed by this condition. To facilitate prompt diagnosis of MCI in patients with HF, van Nieuwkerk and colleagues have proposed a two-step algorithm. The first step involves inquiring about substantial cognitive decline noted by patients or their relatives over the past year, followed by assessing its impact on daily life. A positive response triggers cognitive screening, as some patients with severe deficits may not perceive their cognitive problems. Clinical suspicion, raised by unexplained falls, medication errors, history of delirium, or depressive symptoms, also warrants screening, especially in patients over 65. The second step involves a multidomain cognitive screening test tailored to the patient's baseline cognitive level. Given that executive functioning and memory are commonly affected in patients with cardiac conditions, screening tools should cover these domains. The MoCA and MMSE are commonly used for global cognition assessment. The MMSE, with a cut-off of 24 points, exhibits good sensitivity and negative predictive value but lower specificity for dementia. The MoCA, designed for MCI, is more sensitive but has lower specificity for dementia. An optimal MoCA cut-off may be 26, showing excellent sensitivity and negative predictive value but poorer specificity for dementia. The Informant Questionnaire on Cognitive Decline in the Elderly (IQCODE) is useful for proxies in assessing cognitive trajectories. Multidomain cognitive screening tools, such as the MoCA, the MMSE, or similar digital cognitive tests, stratify risk and inform the need for referral to specialist memory services. HF patients might be affected by CI across multiple domains (including attention, executive function, language, memory and visuospatial capacities). The inclusion of standardized neurocognitive outcomes within cardiovascular trials could help identify high-risk patients.
NeuroARC, with its commitment to standardized neuropsychological endpoint definitions, advocates for the integration of cognitive screening at each trial visit, possibly leveraging the MoCA as a valuable tool. The current landscape lacks reliable imaging and blood biomarkers for identifying individuals at risk for AD or cognitive decline. , Integrating such biomarkers into screening strategies could significantly advance the identification of high‐risk patients and improve overall cognitive assessment in the HF population.
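As a purely illustrative companion to the two-step approach described above, the sketch below encodes the triage logic with the cut-offs quoted in the text (MMSE < 24, MoCA < 26). All function and parameter names are invented for illustration; this is a simplified sketch, not a validated clinical tool, and it does not replace the published algorithm or clinical judgement.

```python
from typing import Optional

def needs_cognitive_screening(decline_reported: bool, affects_daily_life: bool,
                              clinical_suspicion: bool) -> bool:
    """Step 1: screen when the patient or a relative reports substantial
    cognitive decline over the past year that affects daily life, or when
    clinical suspicion arises (unexplained falls, medication errors,
    history of delirium, depressive symptoms - particularly relevant in
    patients over 65)."""
    return (decline_reported and affects_daily_life) or clinical_suspicion

def interpret_multidomain_screen(mmse: Optional[int] = None,
                                 moca: Optional[int] = None) -> str:
    """Step 2: apply the cut-offs quoted in the text (MMSE < 24, MoCA < 26).
    A positive screen supports referral to specialist memory services;
    it is not, by itself, a diagnosis."""
    if mmse is not None and mmse < 24:
        return "positive screen (MMSE) - consider referral to memory services"
    if moca is not None and moca < 26:
        return "positive screen (MoCA) - consider referral to memory services"
    return "negative screen - reassess if clinical suspicion persists"
```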
A recent statement from the Heart Failure Society of America highlights that there are no specific interventions proven to improve cognition or delay the progression of CI in patients with HF. Potential approaches include the treatment of contributing factors and the promotion of physical activity, reserving specific advanced neurologic pharmacotherapies for selected cases ( Figure ). Given the intricate nature of medication schedules and the prevalence of medication-related issues among individuals with HF and CI, physicians must meticulously assess and reconcile medication plans. Implementing strategies to enhance outcomes in this context may involve deprescribing, a supervised process of medication discontinuation. Candidate medications for deprescribing might include those known to exacerbate HF and/or those listed in criteria such as the Beers criteria, which identify medications with potential risks outweighing benefits in older adults, particularly concerning cognition. Additionally, treating co-morbidities can further improve outcomes in this patient population (e.g., blood pressure control in hypertensive patients and rhythm control in AF patients). Despite the pleiotropic effects of exercise training in HF, including enhancements in exercise capacity, cardiac function, peripheral effects and quality of life, limited research has explored the impact of physical exercise on individuals with HF and MCI. Tanne et al. demonstrated cognitive improvements in 20 severe HF patients after an 18-week supervised exercise programme. Redwine et al. reported similar enhancements in cognitive performance in 69 HF participants with tai chi and resistance band exercises over 16 weeks. However, Kitzman et al. did not observe cognitive improvements in 349 HF patients following a 12-week rehabilitation programme (a multidomain physical rehabilitation programme emphasizing strength, balance, mobility and endurance). Conversely, Gary et al. found improved verbal memory in 69 HF participants with combined exercise and cognitive training over 3 months. The aforementioned studies have poorly defined interventions and short follow-up durations, thereby posing a high risk of bias. New pharmacotherapeutic agents have emerged for addressing distinct categories of cognitive dysfunction. Cholinesterase inhibitors (such as donepezil, rivastigmine and galantamine) are presently advocated for managing MCI and dementia associated with AD. However, the utility of cholinesterase inhibitors in cognitive impairment unrelated to AD (for example, in vascular CI) is not supported by evidence; this raises concerns regarding a heightened risk of adverse effects (e.g., cardiac and gastrointestinal), alongside varying levels of evidence quality. Consequently, consensus guidelines do not advocate the use of cholinesterase inhibitors for treating cognitive impairment not associated with AD, given the off-label nature and lack of evidence-based support for such usage. Recently, disease-altering monoclonal antibodies have gained approval for treating individuals with MCI and dementia arising from AD. Phase 3 trials have demonstrated the efficacy of monoclonal antibodies (aducanumab-avwa; lecanemab-irmb) in diminishing amyloid-beta plaques within the brain, resulting in modest slowing of cognitive decline. , However, both treatments are associated with an increased vulnerability to amyloid-related imaging abnormalities, including cerebral microhaemorrhages, cerebral macrohaemorrhages, superficial siderosis, brain oedema, or sulcal effusion.
Subjects with HF were excluded from these clinical trials, given the widespread use of anticoagulation in HF patients. However, investigation of the benefits and targets of disease-modifying treatment beyond classical AD is still ongoing and will need to be updated according to new findings and prospective results.
In conclusion, the intricate relationship between HF and CI underscores the urgent need for a comprehensive approach to patient care. Both conditions share disease processes and pathophysiological mechanisms, emphasizing their close interplay. The profound impact of CI on HF self-care and independence further highlights the importance of addressing cognitive health in HF management, with significant implications for patient prognosis and quality of life. This accentuates the critical need for treatment strategies capable of addressing the cognitive dysfunction associated with HF, leading to improved patient brain health and well-being.
Mauro Massussi, Maria Giulia Bellicini, and Riccardo Proietti declare that they have no conflicts of interest relevant to the content of this work to disclose. Marianna Adamo received speaker fees from Abbott Vascular and Medtronic, outside of the submitted work. M. Metra received consulting honoraria for participation in steering committees or advisory boards or for speeches from Abbott Vascular, Amgen, AstraZeneca, Bayer, Edwards, Fresenius, Novartis, and Servier, outside of the submitted work. Alessandro Padovani has served on the scientific advisory board of GE Healthcare, Eli Lilly, and Actelion Pharmaceuticals; and has received speaker honoraria from Nutricia, IAM Pharmaceuticals, Lansgstone Technology, GE Healthcare, Eli Lilly, UCB Pharma, and Chiesi Pharmaceuticals, outside of the submitted work. Andrea Pilotto has served on the scientific advisory board of Z‐cube (technology division of Zambon Pharma) and has received speaker honoraria from Biomarin and Zambon Pharmaceuticals, outside of the submitted work.
Maggot debridement therapy stimulates wound healing by altering macrophage activation
INTRODUCTION Polymorphonuclear cells (PMN) are the primary cells attracted to the wound site during regular wound healing. These cells decontaminate the wound by phagocytosing bacteria and generating chemokines and cytokines that attract and activate macrophages within the wound. It is well known that macrophages undertake phagocytosis of bacteria, remove damaged tissue and produce growth factors. Both classically activated macrophages (M1) and alternatively activated macrophages (M2) are present in lesions, affecting the functioning of macrophages. M1 macrophages are responsible for eliminating pathogens and debriding wounds by removing dying cells and debris. In addition, they produce proteases that aid in tissue degradation. Interleukin (IL)‐4 and IL‐13, by contrast, facilitate the activation of M2 macrophages. These macrophages exhibit several markers, such as mannose receptors, L‐arginase 1, Dectin‐1, FIZZ1 and Ym1. In addition, their expression of pro‐inflammatory cytokines such as IL‐1, IL‐6 and tumour necrosis factor (TNF) is reduced. M2 macrophages are known to produce anti‐inflammatory mediators including TGF‐β, IL‐10 and IL‐4. These mediators serve an essential function in promoting angiogenesis and facilitating the resolution of inflammation. However, phenotypic changes in macrophages associated with diabetic foot ulcers (DFU) have not been reported. Maggot debridement therapy (MDT) is currently employed around the world to treat chronically infected wounds. The use of MDT is considered to affect wound bed preparation in several ways. First, it effectively eliminates nonviable tissue. Second, it helps combat infection by lowering the bioburden. Lastly, it aids the wound remodelling process. Antimicrobial properties, anti‐inflammatory effects, neoangiogenesis reduction and wound healing improvement of maggot larvae and their secretions have been documented in previous research. Despite its demonstrated clinical success in debridement and wound healing, previous research has not yet established a mechanism for the effectiveness of MDT. To gain a better understanding of the healing process of diabetic wounds, our objective was to determine the phenotype of activated macrophages in wounds both pre‐ and post‐MDT, as well as to investigate the events of macrophage activation in the diabetic environment before and after MDT.
MATERIALS AND METHODS 2.1 Patients Between August 2018 and December 2018, 107 patients diagnosed with diabetic foot ulcers (DFU) were randomly selected from cases at Nanjing Junxie Hospital in China to participate in this study. Patients with wounds exhibiting soft necrotic tissue, adhesive exfoliated tissue and antibiotic‐resistant conditions underwent maggot therapy. Specifically, individuals classified as Grade 2B and Grade 3B according to the University of Texas classification system were included. The Texas system categorizes wounds into different grades: Grade 0 indicates fully healed ulcerated wounds; Grade 1 refers to superficial wounds that do not extend to joints, tendons or bones; Grade 2 involves wounds penetrating joint capsules and tendons; and Grade 3 comprises wounds reaching the bone or joint. Wounds were also classified into four stages: clean wounds, nonischemic infected wounds, ischemic noninfected wounds and ischemic infected wounds. MDT was applied to 54 patients with type 2 diabetes mellitus (T2DM) and DFU (experimental group), while 53 patients diagnosed with T2DM and DFU who did not undergo MDT served as the control group. Demographic characteristics of both groups are summarized in Table . After MDT administration, typically within 24–72 h, 1-cm³ tissue samples were extracted from the central wound region of each participant's foot in the experimental group, following maggot removal but before any surgical intervention. Similarly, 1 cm³ of peripheral lesion tissue was collected from each participant in the control group, after standard debridement procedures but before surgical intervention. This study received approval from the ethical committee of Nanjing Junxie Hospital, and informed consent was obtained from all participants. 2.2 Immunohistochemistry analysis Wound biopsies were fixed in 10% neutral buffered formalin for 72 h. Subsequently, 5-µm paraffin sections were used for immunohistochemistry (IHC) analysis. The sections were first incubated with the primary antibodies, anti‐ECF‐L (Ym1) (R&D Systems, Minneapolis, MN) and anti‐Gr‐1 (R&D Systems), for 1 h at 25°C. After that, they were incubated with a biotinylated secondary antibody, specifically a goat anti‐human secondary antibody obtained from Vector Laboratories, Burlingame, CA. The sections were then processed using the VECTASTAIN ABC‐AP kit from Vector Laboratories for 30 min. Following this, the sections were stained with an alkaline phosphatase red substrate and then counterstained with haematoxylin for visualization. 2.3 Gene expression analysis RNA was extracted from wound biopsies collected before and after MDT utilizing the PerfectPure RNA Isolation Kit for Fibrous Tissue (5 Prime Inc., Gaithersburg, MD). The purity of the extracted RNA was evaluated with an RT² Profiler PCR quality control kit (SABiosciences, Frederick, MD). The mRNA levels of 84 distinct inflammatory cytokines were assessed using a real‐time PCR array (Array #PAMM‐011; SABiosciences) focusing on the inflammatory pathway, following the manufacturer's guidelines. The RNA samples were converted to complementary DNA (cDNA) using the RT² First Strand Kit (SABiosciences), and the cDNA samples were combined with RT² SYBR Green PCR Master Mix (SABiosciences). The presence of inflammatory cytokines in the samples was determined through RT² Profiler PCR arrays (SABiosciences), including the cytokine genes IFNG, IL‐10, TGF‐β1 and TNF. The specificity of amplification for each sample was confirmed using dissociation curve analysis, and agarose gel electrophoresis validated the amplicon size. The relative expression levels of each gene were calculated using the Delta‐Delta CT (ΔΔCt) method, with GAPDH serving as the reference gene. 2.4 Excretion/secretion collection Lucilia sericata larvae incubated and kept under sterile conditions were used for MDT. Excretion/secretion (ES) products were derived from third‐instar larvae. Twenty larvae were washed with 10 mL of sterile phosphate‐buffered saline after being stored at 37°C for 48 h, and the ES product was then collected. The protein concentrations were measured using a colorimetric protein assay in accordance with the manufacturer's guidelines (Bio‐Rad Laboratories). 2.5 Data analysis Before the analysis, the data were thoroughly screened to guarantee accuracy and normality. Appropriate parametric statistical tests were applied to examine differences between groups. In cases where the assumptions of parametric tests were not met, alternative nonparametric statistical tests were employed. All analyses were carried out with a significance level set at p < 0.05.
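To make the Delta-Delta CT calculation referenced in the Gene expression analysis subsection concrete, here is a minimal Python sketch of the 2^(-ΔΔCt) fold-change computation with GAPDH as the reference gene; all Ct values and the gene choice are hypothetical placeholders, not data from this study.

```python
# Minimal sketch of the Delta-Delta CT (ddCt) fold-change calculation.
# All Ct values are hypothetical placeholders, not data from this study.

def fold_change(ct_gene_post, ct_gapdh_post, ct_gene_pre, ct_gapdh_pre):
    """Return the 2^(-ddCt) fold change of a target gene, normalized to GAPDH."""
    d_ct_post = ct_gene_post - ct_gapdh_post  # normalize post-MDT sample
    d_ct_pre = ct_gene_pre - ct_gapdh_pre     # normalize pre-MDT sample
    dd_ct = d_ct_post - d_ct_pre              # post-MDT relative to pre-MDT
    return 2 ** (-dd_ct)

# Example with hypothetical Ct values for IL-10 before and after MDT
fc = fold_change(ct_gene_post=24.1, ct_gapdh_post=18.0,
                 ct_gene_pre=26.3, ct_gapdh_pre=18.2)
print(f"IL-10 fold change (post- vs. pre-MDT): {fc:.2f}")  # -> 4.00
```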
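The Data analysis subsection above describes applying parametric tests when their assumptions hold and nonparametric alternatives otherwise. The sketch below illustrates one common way to implement that decision (a Shapiro-Wilk normality check followed by Student's t-test or the Mann-Whitney U test); the specific tests and the example values are assumptions for illustration, since the paper does not name the tests used.

```python
# Illustrative two-group comparison: parametric if both groups pass a
# normality check, otherwise a nonparametric alternative. The test choices
# and the example values are assumptions, not taken from the paper.

from scipy.stats import shapiro, ttest_ind, mannwhitneyu

def compare_groups(a, b, alpha=0.05):
    """Return (test name, p-value) for a two-group comparison."""
    normal = shapiro(a).pvalue > alpha and shapiro(b).pvalue > alpha
    if normal:
        return "Student's t-test", ttest_ind(a, b).pvalue
    return "Mann-Whitney U", mannwhitneyu(a, b).pvalue

mdt = [12.1, 10.4, 11.8, 13.0, 12.5, 11.1]   # hypothetical post-MDT measurements
ctrl = [7.9, 8.4, 6.5, 7.2, 8.8, 7.0]        # hypothetical control measurements
name, p = compare_groups(mdt, ctrl)
print(f"{name}: p = {p:.4f} (significant at p < 0.05: {p < 0.05})")
```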
RESULTS 3.1 Repair phase of the healing process indicated by alterations in haematoxylin and eosin staining, wound healing rate and the overall count of CD68+ cells Granulation tissue was observed on the third day after MDT, as depicted in Figure . At each observation point, the wound margins of both groups underwent histological examination. Haematoxylin and eosin (H&E) staining revealed distinctive features. Images taken after MDT displayed an increase in infiltrating cells. To identify the presence of macrophages at the wound site, CD68 immunohistochemical (IHC) staining was employed. As shown in Figure , a significant difference in the number of CD68+ cells before and after MDT was observed. After MDT, there was a substantial increase in the number of CD68+ cells. In summary, these findings strongly suggest that the post‐MDT period is a crucial component of the inflammatory phase. 3.2 Expression of M2 polarization epitopes in the ES‐treated RAW264.7 macrophage cell line To determine the phenotype of macrophages following MDT, RAW264.7 cells were grown and stimulated for 24 h with the ES product—the main active component of the maggots. To define the macrophage phenotype, the expression of CD68/inducible nitric oxide synthase (iNOS) or CD68/Arginase‐1 (Arg‐1), which are known markers of M1 and M2, respectively, was evaluated. Figure depicts the findings of co‐localization analysis indicating that Arg‐1 labeling was positive in macrophages. The control group exhibited no labeling when the primary antibodies were omitted from the second round of staining. 3.3 Downregulation of iNOS expression and upregulation of Arg‐1 levels in RAW264.7 cells treated with the ES product To measure the activity levels of M1 and M2 macrophages, we analysed iNOS and Arg‐1 expression in RAW264.7 cells treated with the ES product. Figure depicts a decrease in iNOS levels together with an increase in Arg‐1 protein and mRNA levels after ES treatment. Following MDT, the wounds in diabetic patients display a decrease in iNOS expression and an increase in Arg‐1 levels. 3.4 M1/M2 instructive cytokine levels shifted after MDT Given that IL‐12 and IFN‐γ trigger the activation of M1 macrophages, whereas IL‐10 and TGF‐β promote M2 activation, our study utilized real‐time PCR to assess the expression patterns of these instructive cytokines in RAW264.7 cells exposed to ES products. Remarkably, a significant reduction in Th1 cytokines was noted after the co‐culture process, as illustrated in Figure . This observation sheds light on the dynamics of cytokine‐induced macrophage polarization and underscores the potential regulatory role of ES products in modulating macrophage responses. 3.5 Increased AMPK expression in macrophages after binding of RAW264.7 cells to ES The expression of adenosine monophosphate‐activated protein kinase (AMPK) in RAW264.7 cells stimulated by ES was detected at the gene and protein levels; AMPK expression increased following a 24‐h culture and stimulation of RAW264.7 cells with ES products (Figure ). 3.6 Increased TLR‐4 expression in macrophages after binding of RAW264.7 cells to ES The expression of TLR‐4 in RAW264.7 cells stimulated by ES was detected by flow cytometry; after 24 h of culturing and stimulating RAW264.7 cells with ES products, an increase in TLR‐4 expression was detected (Figure ).
DISCUSSION The activation of macrophages in vivo, particularly in lung and wound tissues, is gaining increasing attention. The features of macrophage activation have been examined extensively in vitro. Recent studies on human wounds have indicated that M1‐related genes are upregulated during the initial stages of wound healing. As the wound healing process advances, however, M2‐related genes become increasingly prominent. Consistent with previous research on the timing of activation‐related gene expression in human wounds, we used an immunofluorescence assay to demonstrate an increase in M2 macrophages during the proliferative stage of wound healing following MDT. The presence of a hyperglycaemic environment and glycosylation end products in diabetic wounds can result in functional impairment, failure of macrophage activation or a decrease in macrophage numbers. According to studies, the number of M2 cells decreases significantly throughout the wound repair phase, which influences the healing process. Second, the increased and persistent production of pro‐inflammatory factors, such as TNF‐α, by M1 cells can result in an excessively strong inflammatory response. The aforementioned processes create an M1‐M2 polarization imbalance within the diabetic wound environment, ultimately affecting the wound healing process. Consequently, the regulation of the macrophage polarization phenotype during diabetic foot wound repair, as well as effective control of the timing and intensity of macrophage polarization, may facilitate anti‐inflammatory repair and aid in the treatment of diabetic foot wounds. AMPK is a ubiquitous energy‐sensitive serine/threonine protein kinase found in eukaryotes. By modulating two key pathways, AMPK plays a crucial role in maintaining intracellular energy homeostasis. First, it promotes glycolysis and fatty acid oxidation by phosphorylating downstream target proteins. Second, it slows metabolic pathways that deplete energy, such as protein and lipid synthesis. Numerous studies have shown a close relationship between AMPK and the polarization state of macrophages during an inflammatory response. The stimulation of AMPK‐deficient macrophages with bacterial lipopolysaccharide (LPS) leads to the polarization of cells towards the M1 type and the production of IL‐6 and TNF‐α. However, subsequent activation of AMPK was found to specifically reverse the LPS‐induced expression of TNF‐α. In our study, we found that AMPK regulates the polarization of macrophages via activation of Toll‐like receptor 4 (TLR‐4) and its downstream SOCS1/3 molecules. TLRs are pattern recognition receptors that initiate signal transduction pathways by binding to pathogen‐associated molecular patterns (PAMPs). TLRs are essential for both natural and acquired immunity. TLR‐4 is responsible for recognizing a variety of stimuli, such as Gram‐negative bacteria, LPS, heat shock proteins (HSPs) and free fatty acids (FFA). In the literature, the role of TLR‐4‐mediated signalling in macrophage activation and inflammatory responses has been widely established. TLR‐4 inhibitors can greatly reduce M1 activation. Based on the results of our study, following ES treatment, macrophages exhibited an M2 phenotype and expressed inflammatory inhibitory factors, AMPK levels were elevated, and TLR‐4 expression was upregulated.
Our study aimed to investigate the impact of ES on the biological functions of macrophages, with a specific focus on understanding the intricate interactions among signalling molecules crucial for wound healing. The ultimate goal was to unravel the underlying mechanism behind MDT for DFU treatment. This hypothesis was formulated after a comprehensive analysis of the existing literature and the outcomes of previous research conducted by our team (as depicted in Figure ). Our hypothesis posits that ES, through its effect on macrophages, activates AMPK, a key enzyme involved in cellular energy regulation. This activation, in turn, influences the polarization of macrophages via the TLR‐4/SOCS1/3 axis. Consequently, this process enhances the expression of vascular endothelial growth factor (VEGF), a protein vital for promoting angiogenesis and wound healing. The increased VEGF levels, stimulated by the polarized macrophages, contribute significantly to the acceleration of tissue repair in the wound. This hypothesis, rooted in a synthesis of existing knowledge and our previous research findings, holds substantial promise for shedding light on the intricate biological processes involved in wound healing. By understanding the specific pathways and molecules influenced by ES, this study contributes not only to the scientific understanding of wound healing mechanisms but also paves the way for potential advancements in the treatment of DFU. The elucidation of these intricate interactions will likely have significant implications for the development of more effective therapeutic strategies, ultimately improving the quality of life for individuals suffering from diabetic complications. In summary, our study offers new perspectives on how diabetic foot injuries impact macrophage activation, enhancing our understanding of the vital role played by MDT in the wound healing process. Further research is necessary to fully grasp the mechanisms governing Th1/Th2‐type cytokine differentiation in diabetes and the intricate involvement of polarized macrophage phenotypes in healing diabetic foot lesions. This exploration will undoubtedly advance our knowledge of diabetic wound healing, potentially leading to more effective therapeutic approaches for these critical medical conditions.
No external funding was received to conduct this study.
The authors declare that they have no competing interests.
The study was conducted in accordance with the Declaration of Helsinki (as revised in 2013). The study was approved by the Ethics Committee of the Junxie Hospital (No. 20180710–002). Written informed consent was obtained from all participants.
Simultaneous isolation and culture of endothelial colony-forming cells, endothelial cells and vascular smooth muscle cells from human umbilical cords
Fetal growth and development depend mainly on the functional integrity of the maternal-placental-fetal circulation. In humans, oxygen and nutrients are delivered to the fetus through a single umbilical vein (HUV), while deoxygenated blood containing metabolic waste is returned to the placenta via two umbilical arteries (HUAs). The regulation of the human umbilical circulation under physiological and pathological conditions remains poorly understood. We previously demonstrated that intrauterine growth restriction (IUGR) is associated with sex-specific alterations in the human umbilical circulation . Our data suggested a differential contribution of subcellular compartmentation within vascular smooth muscle cells (VSMCs) depending on fetal sex, vessel type and the presence of IUGR . We therefore aimed to develop a protocol to isolate and culture vascular cells from umbilical cords of healthy or sick neonates to further investigate the relative contribution of each cell type to the regulation of the human umbilical circulation and to assess the influence of biological sex. Indeed, there is growing evidence that this parameter, often neglected in clinical and basic research, plays a key role in physiological and pathophysiological processes, particularly in the cardiovascular system . VSMCs are the predominant cell type in the vascular wall, forming a multicellular layer whose primary function is to regulate vascular tone. They can contract or relax in response to various stimuli, thereby modulating blood flow and blood pressure. Endothelial cells (ECs) constitute the monolayer lining the blood vessels' lumen. They regulate vascular tone by interacting with VSMCs. Endothelial progenitor cells (EPCs) are circulating components of the endothelium, with vascular repair properties. EPCs can be distinguished according to their phenotype and functional properties: early EPCs appear early in culture, with no capacity to form vessels in vivo, whereas late EPCs or endothelial colony-forming cells (ECFCs) have a high proliferative capacity and are able to form a vascular network in vitro and in vivo. Altered circulating EPC number and function have been observed in various cardiometabolic disorders . Exploring the functional properties of ECs and VSMCs isolated from the HUV and HUAs, and of ECFCs from cord blood, would contribute to a better understanding of the regulatory mechanisms implicated in the human umbilical circulation. Simultaneous isolation and culture of HUVECs, HUAECs, HUVSMCs, HUASMCs and ECFCs from the same patient will not only enable optimal use of each biological sample, but also allow direct comparison between the different cell types. During the development of our methodology, we favored the most time- and cost-effective approaches, with limited risk of cross-contamination, while providing satisfactory results when characterizing the different cell types using biomarkers. The present report describes a simple methodology for cell isolation and characterization enabling reliable harvesting of these five vascular cell types from the same umbilical cord.
Umbilical cord collection The ethical approval was granted by the “Commission cantonale d'éthique de la recherche sur l'être humain (CER-VD)” (protocol number CER-VD-2022-01278). Umbilical cords were obtained from newborns delivered at the Maternity of the University Hospital CHUV in Lausanne between July 2023 and February 2024. Inclusion criteria encompassed term pregnancies with singleton fetuses; exclusion criteria included neonates with a birth weight above the 90th percentile, fetal abnormalities, genetic syndromes, a single HUA, mothers with HIV or hepatitis A, B, or C, and preeclampsia. A 10–15-cm segment of umbilical cord was collected as close as possible to the fetus soon after delivery and kept at 4 °C in phosphate-buffered saline (PBS, Gibco, 70013-016) until dissection. Umbilical cord blood was punctured from the HUV into a heparinized tube (S-Monovette® 9ML LH, Sarstedt, 02.1065) and kept at 4 °C until use. The cord blood was used within 12 h and the cord within 24 h after delivery. The methodology described below was validated using 14 umbilical cords to simultaneously isolate ECFCs, HUVECs, HUAECs, HUVSMCs and HUASMCs. Figure summarizes the main steps in the procedure for isolating vascular cells from umbilical cords. Cell culture Each cell type was cultured on plasticware pre-coated with Type B bovine Gelatin 0.2% (Sigma, G1393-100ML). All culture media were supplemented with 100 µg/ml Primocin® (InvivoGen, ant-pm-05) to prevent contamination by fungi, bacteria and mycoplasma. Unless otherwise stated, culture media were changed twice a week. Cells were trypsinized (PAN-Biotech, PANP10-024100) and subcultured when confluency reached 80–95%. ECFCs and SMCs were frozen in Cryo-SFM (PromoCell, C-29910), and ECs in DMEM containing 20% FBS and 10% DMSO, at a rate of 1 °C/min, and stored in liquid nitrogen for further use. ECFC isolation and culture ECFC isolation was directly inspired by a previously described method . Briefly, cord blood (2–7 ml) was overlaid onto the same volume of Ficoll (Histopaque-1077, Sigma, catalog number 10771) and centrifuged at 740 g for 30 min without brakes. The serum phase (top layer) was harvested and stored at −80 °C for further investigation. Mononuclear cells were collected from the interphase (white disk) between the serum and Ficoll phases. Cells were washed twice with PBS and centrifuged for 10 min at 610 g and 430 g before being resuspended in Endothelial Cell Growth Medium MV2 (PromoCell, C-22022). Cells were then transferred to a 25-cm² flask and left for 3 days at 37 °C in a humidified and controlled atmosphere containing 5% CO₂ before the first medium change. ECFCs were subcultured when confluence reached 80–95% or if some cells in the center of the colonies started to show signs of over-confluence. EC isolation and culture The umbilical cord was dissected on ice to isolate 4–6-cm segments from each vessel, which were cleaned of Wharton's jelly as much as possible, longitudinally opened to expose the lumen and washed in four successive baths of sterile PBS to remove red blood cells and other contaminants. To isolate ECs, both HUAs and the HUV were separately incubated in 3 ml of 1 mg/ml collagenase/dispase (C/D) solution (ROCHE, 10269638001) for 30 min at 37 °C. The C/D solution containing ECs was harvested, and vessels were washed with 3 ml PBS to collect the remaining detached cells. Each 6-ml cell suspension was diluted to 10–15 ml with PBS and centrifuged for 5 min at 110 g. Each pellet was resuspended in Endothelial Cell Medium 2 (PromoCell, C-22011) and transferred to a 25-cm² flask. To promote cell growth and development, the medium was supplemented with 10% heat-inactivated FBS (FBS Supreme, PAN-Biotech, P30-3031) until passage two (P2), after which FBS was removed due to its potential impact on the sexual dimorphism of the cells and the VSMC phenotype . VSMC isolation and culture Vessels used for EC isolation were washed in PBS to discard remaining isolated ECs, cut into approximately 1-mm² pieces and transferred to a 24-well plate (8–10 pieces per well; 6 wells per vessel type). Pieces of both HUAs were evenly distributed together in 6 wells, while HUV explants were placed in a further 6 wells. To facilitate explant adsorption to the bottom of the wells, pieces were allowed to adhere but not to over-dry for approximately 5–10 min at room temperature (RT°) before gently adding only 150 µl of medium M231 (Human Vascular Smooth Muscle Cell Basal Medium M231, Gibco, M231500) supplemented with Smooth Muscle Growth Supplement (SMGS, Gibco, S00725) and 20% heat-inactivated FBS. The next day, the culture medium volume was completed to 500 µl. Tissues were removed if they detached or after 4–10 days. Visual inspection during the first days helped exclude contaminated wells or wells containing cells with an EC phenotype. Typically, 2–6 visually optimal wells were pooled together and further processed. Cells were trypsinized and subcultured in 25- or 75-cm² flasks depending on the quantity of harvested VSMCs. From P2, FBS was removed from the medium. VSMCs were subcultured when reaching 80–95% confluence or before over-confluence symptoms occurred in the densest areas. Characterization Morphology All cell types were visually inspected during early development and identified by their typical phenotype under a phase-contrast microscope. ECFCs and ECs have a cobblestone shape; VSMCs exhibit a spindle shape and a “hill-and-valley” pattern. In addition, ECFCs appear as colonies. Western blot (WB) Cell pellets for WB were obtained by trypsinization of three 75-cm² flasks at early passages (P2-P3), followed by a wash with PBS and a 5-min centrifugation at 110 g to discard any traces of medium, and were stored at −80 °C until protein extraction. Pellets were resuspended in 450 µl lysis buffer {50 mM HEPES, 1 mM EDTA, 1 mM EGTA, 10% glycerol, 1 mM DTT, 5 μg/ml pepstatin, 3 μg/ml aprotinin, 10 μg/ml leupeptin, 0.1 mM 4-(2-aminoethyl)benzenesulfonyl fluoride hydrochloride (AEBSF), 1 mM sodium vanadate, 50 mM sodium fluoride, and 20 mM 3-[(3-cholamidopropyl)dimethylammonio]-1-propanesulfonate (CHAPS)} and went through 3 freeze/thaw cycles between liquid nitrogen and a 37 °C water bath. Lysates were centrifuged at 3000 g for 10 min at 4 °C; supernatant protein concentration was quantified using a BCA protein assay kit (Pierce, catalog number 23227) according to the manufacturer's instructions. WB was performed as previously described using 60 µg of protein per lane. The primary antibodies were directed against ERG (1:1000, Abcam, ab92513), endothelial nitric oxide synthase (eNOS) (1:200, Becton Dickinson, 610296), calponin 1 (Calp1) (1:5000, Abcam, ab46794), alpha-smooth muscle actin (SMA) (1:250, Sigma, A2547) and von Willebrand factor (VWF) (1:500, Cloud-Clone, PAA833Hu01). The secondary antibodies were IRDye 800 Donkey anti-mouse and IRDye 680 Donkey anti-rabbit (1:10,000, LI-COR Biosciences, 926-32212 and 926-68073). Visualization was done using an Odyssey Infrared Imaging System (LI-COR), and brightness was adjusted using ImageJ software. As neither ECs nor VSMCs have a fully dedicated marker, preliminary experiments were conducted to determine, based on the literature, which proteins would be most suitable as biomarkers for differentiating ECs from SMCs in this project (data not shown). Using WB, we thus selected proteins found in both native HUV and HUAs from male and female newborns, but detected only in endothelial-like cells (ECs and ECFCs) or in SMCs, in order to allow exclusion of cross-contamination between ECs and SMCs in cell cultures derived from umbilical vessels. Immunocytofluorescence (ICF) 12,000 cells (P3-P4) per well were plated on a polymer 8-chamber slide (μ-slide 8-well IbiTreat, Ibidi, 80826-IBI) coated with gelatin and allowed to adhere overnight at 37 °C in a humidified and controlled atmosphere containing 5% CO₂. Cells were then fixed according to the targeted protein as follows: cold methanol for 1 min on ice for SMA and VWF; cold acetone for 10 min on ice for eNOS and Calp1; 4% paraformaldehyde for 10 min at RT° for ERG. Additional wells with the same fixation conditions were used for unstained controls (without primary antibodies). After fixation, slides were dried out and kept frozen at −20 °C until ECs and VSMCs were ready to be processed simultaneously. Following a 5-min permeabilization with Triton 0.25% at RT°, cells were blocked for 20 min at RT° with 4% goat serum, incubated overnight at 4 °C with the same primary antibodies as for WB diluted in blocking solution (1:100), washed, and incubated with goat anti-rabbit Cy2 (1:1000, Abcam, ab6940) and anti-mouse Alexa 568 (1:100, Life Technologies A-11004) for 1 h at RT°. Finally, cells were washed and covered using a non-hardening mounting medium containing DAPI. Images were taken with a Zeiss inverted microscope using a 20× objective and optimized for visualization using ImageJ. Polychromatic flow cytometry (PFC) The ECFC profile was confirmed by PFC using conjugated antibodies against CD31 (PE, BioLegend, 303106), CD146 (APC, BioLegend, 361016) and CD45 (BV 421, BioLegend, 368522), according to a previously described method and the manufacturer's instructions. Briefly, 300,000 cells (P2-P5) were split into 3 tubes: one with all three antibodies (total staining), one unstained control without antibodies, and one for the viability test (Zombie NIR™ Fixable Viability Kit, BioLegend, 423105). Cells were incubated for 20 min with the viability dye at 1:500 at RT°, or with primary antibodies at 1:40 on ice. After one wash for the viability test or two washes for the stained and unstained tubes, flow cytometry was performed on a Cytoflex S (V4-B2-Y4_r3, C09766) using CytExpert software (v2.3.1.22). Approximately 20,000 events were recorded in the FSC/SSC gated population during each read. The laser (L) and filter (F) specifications of the Cytoflex S were as follows: PB450 (CD45) L:405 F:450/45, PE (CD31) L:561 F:585/42, APC (CD146) L:638 F:660/10, APC-A750 (Zombie NIR™) L:638 F:780/60. Analysis was performed using the free web-based interface available at https://floreada.io/ . First, the cell population was gated using forward/side scatter (FSC/SSC) to exclude cellular debris; cell death was subsequently verified to be less than 1% of the gated population with the viability dye; finally, marker expression was measured on single-parameter histograms and two-parameter density plots, with the positive threshold based on unstained controls. No spectral overlap (Fig. S3) nor non-specific signal was detected, except for the viability dye, which was run separately, so no compensation was necessary. The PFC experiment was designed and optimized as detailed in the Supplementary Information (Online Resource 1).
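As a rough illustration of the gating strategy just described (an FSC/SSC gate to exclude debris, then a marker-positive threshold derived from the unstained control), the Python sketch below reproduces the logic on simulated events; the gate bounds, the 99.9% quantile threshold and all intensity distributions are illustrative assumptions, not the settings used in the floreada.io analysis.

```python
# Illustrative FSC/SSC gating and marker thresholding on simulated events.
# Gate bounds, threshold quantile and intensity distributions are assumptions.

import numpy as np

rng = np.random.default_rng(0)
n = 25_000
fsc = rng.normal(1.5e6, 0.6e6, n)  # hypothetical forward-scatter values
ssc = rng.normal(0.8e6, 0.4e6, n)  # hypothetical side-scatter values

# Rectangular FSC/SSC gate to exclude cellular debris
gated = (fsc > 0.3e6) & (fsc < 3.5e6) & (ssc > 0.1e6) & (ssc < 2.5e6)

# Positive threshold set so ~99.9% of unstained-control events are negative
cd31_unstained = rng.lognormal(4.0, 0.5, n)[gated]
cd31_stained = rng.lognormal(7.0, 0.5, n)[gated]
threshold = np.quantile(cd31_unstained, 0.999)
pct_positive = 100.0 * np.mean(cd31_stained > threshold)

print(f"Gated events: {int(gated.sum())}, CD31+: {pct_positive:.1f}%")
```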
The ethical approval was granted by the “Commission cantonale d'éthique de la recherche sur l'être humain (CER-VD)” (protocol number CER-VD-2022-01278). Umbilical cords were obtained from newborns delivered at the Maternity of the University Hospital CHUV in Lausanne, between July 2023 and February 2024. Inclusion criteria encompassed term pregnancies with singleton fetuses; exclusion criteria included neonates with a birth weight above the 90th percentile, fetal abnormalities, genetic syndromes, single HUA, mothers with HIV, hepatitis A, B, or C, and preeclampsia. A 10–15-cm segment of umbilical cord was collected as close as possible to the fetus soon after delivery and kept at 4 °C in phosphate-buffered saline (PBS, Gibco, 70013-016) until dissection. Umbilical cord blood was punctured from the HUV in a heparinized tube (S-Monovette® 9ML LH, Sarstedt, 02.1065) and kept at 4 °C until use. The cord blood was used within 12 h and the cord within 24 h after delivery. The methodology described below was validated using 14 umbilical cords to simultaneously isolate ECFCs, HUVECs, HUAECs, HUVSMCs and HUASMCs. Figure summarizes the main steps in the procedure for isolating vascular cells from umbilical cords.
Each cell type was cultured on plasticwares pre-coated with Type B bovine Gelatin 0.2% (Sigma, G1393-100ML). All culture media were supplemented with 100 µg/ml Primocin® (InvivoGen, ant-pm-05) to prevent contamination by fungi, bacteria and mycoplasma. Unless otherwise stated, culture media were changed twice a week. Cells were trypsinized (PAN-Biotech, PANP10-024100) and subcultured when confluency reached 80–95%. ECFCs and SMCs were frozen in Cryo-SFM (PromoCell, C-29910), and ECs in DMEM containing 20% FBS and 10% DMSO, at a rate of 1C°/min, and stored in liquid nitrogen for further use. ECFC isolation and culture ECFC isolation was directly inspired by a previously described method . Briefly, cord blood (2-7 ml) was overlaid onto the same volume of Ficoll (Histopaque-1077, Sigma, catalog number 10771) and centrifuged at 740 g for 30 min without brakes. The serum phase (top layer) was harvested and stored at − 80 °C for further investigation. Mononuclear cells were collected from the interphase (white disk) between serum and Ficoll phase. Cells were washed twice with PBS and centrifuged 10 min at 610 g and 430 g before being resuspended in Endothelial Cell Growth Medium MV2 (PromoCell, C-22022). Cells were then transferred to a 25-cm 2 flask and left for 3 days at 37 °C in a humidified and controlled atmosphere containing 5% CO 2 before the first medium change. ECFCs were subcultured when confluence reached 80–95% or if some cells in the center of the colonies started to show over-confluence symptoms. EC isolation and culture The umbilical cord was dissected on ice to isolate 4–6 cm segments from each vessel, which were cleaned from Wharton’s jelly as much as possible, longitudinally opened to expose the lumen and washed in four successive baths of sterile PBS to remove red blood cells and other contaminants. To isolate ECs, both HUAs and the HUV were separately incubated in 3 ml of 1 mg/ml collagenase/dispase (C/D) solution (ROCHE, 10269638001) for 30 min at 37 °C. The C/D solution containing ECs was harvested, and vessels were washed with 3 ml PBS to harvest remaining detached cells. Each 6-ml cell suspension was diluted to 10–15 ml with PBS and centrifuged 5 min at 110 g. Each pellet was resuspended in Endothelial Cell Medium 2 (PromoCell, C-22011) and transferred to a 25-cm 2 flask. To promote cell growth and development, the medium was supplemented with 10% heat-inactivated FBS (FBS Supreme, PAN-Biotech, P30-3031) until passage two (P2), after which FBS was removed due to its potential impact on sexual dimorphism of the cells and VSMCs phenotype . VSMC isolation and culture Vessels used for EC isolation were washed in PBS to discard remaining isolated ECs, cut into approximately 1-mm 2 pieces and transferred to a 24-well plate (8–10 pieces per well; 6 wells per vessel type). Pieces of both HUAs were evenly distributed together in 6 wells, while HUV explants were placed in a further 6 wells. To facilitate explant adsorption to the bottom of the wells, pieces were allowed to adhere but not to over-dry for approximatively 5–10 min at room temperature (RT°) before gently adding only 150 µl of medium M231 (Human Vascular Smooth Muscle Cell Basal Medium M231, Gibco, M231500) supplemented with Smooth Muscle Growth Supplement (SMGS, Gibco, S00725) and 20% heat-inactivated FBS. The next day, the culture medium volume was completed to 500 µl. Tissues were removed if they detached or after 4–10 days. 
Visual inspection during the first days helped exclude contaminated wells or wells containing cells with EC phenotype. Typically, 2–6 visually optimal wells were pooled together and further processed. Cells were trypsinized and subcultured in 25- or 75-cm 2 flasks depending on the quantity of harvested VSMCs. From P2, FBS was removed from the medium. VSMCs were subcultured when reaching 80–95% confluence or before over-confluence symptoms occurred in the densest areas.
ECFC isolation was directly inspired by a previously described method . Briefly, cord blood (2-7 ml) was overlaid onto the same volume of Ficoll (Histopaque-1077, Sigma, catalog number 10771) and centrifuged at 740 g for 30 min without brakes. The serum phase (top layer) was harvested and stored at − 80 °C for further investigation. Mononuclear cells were collected from the interphase (white disk) between serum and Ficoll phase. Cells were washed twice with PBS and centrifuged 10 min at 610 g and 430 g before being resuspended in Endothelial Cell Growth Medium MV2 (PromoCell, C-22022). Cells were then transferred to a 25-cm 2 flask and left for 3 days at 37 °C in a humidified and controlled atmosphere containing 5% CO 2 before the first medium change. ECFCs were subcultured when confluence reached 80–95% or if some cells in the center of the colonies started to show over-confluence symptoms.
The umbilical cord was dissected on ice to isolate 4–6 cm segments from each vessel, which were cleaned from Wharton’s jelly as much as possible, longitudinally opened to expose the lumen and washed in four successive baths of sterile PBS to remove red blood cells and other contaminants. To isolate ECs, both HUAs and the HUV were separately incubated in 3 ml of 1 mg/ml collagenase/dispase (C/D) solution (ROCHE, 10269638001) for 30 min at 37 °C. The C/D solution containing ECs was harvested, and vessels were washed with 3 ml PBS to harvest remaining detached cells. Each 6-ml cell suspension was diluted to 10–15 ml with PBS and centrifuged 5 min at 110 g. Each pellet was resuspended in Endothelial Cell Medium 2 (PromoCell, C-22011) and transferred to a 25-cm 2 flask. To promote cell growth and development, the medium was supplemented with 10% heat-inactivated FBS (FBS Supreme, PAN-Biotech, P30-3031) until passage two (P2), after which FBS was removed due to its potential impact on sexual dimorphism of the cells and VSMCs phenotype .
Vessels used for EC isolation were washed in PBS to discard remaining isolated ECs, cut into approximately 1-mm 2 pieces and transferred to a 24-well plate (8–10 pieces per well; 6 wells per vessel type). Pieces of both HUAs were evenly distributed together in 6 wells, while HUV explants were placed in a further 6 wells. To facilitate explant adsorption to the bottom of the wells, pieces were allowed to adhere but not to over-dry for approximatively 5–10 min at room temperature (RT°) before gently adding only 150 µl of medium M231 (Human Vascular Smooth Muscle Cell Basal Medium M231, Gibco, M231500) supplemented with Smooth Muscle Growth Supplement (SMGS, Gibco, S00725) and 20% heat-inactivated FBS. The next day, the culture medium volume was completed to 500 µl. Tissues were removed if they detached or after 4–10 days. Visual inspection during the first days helped exclude contaminated wells or wells containing cells with EC phenotype. Typically, 2–6 visually optimal wells were pooled together and further processed. Cells were trypsinized and subcultured in 25- or 75-cm 2 flasks depending on the quantity of harvested VSMCs. From P2, FBS was removed from the medium. VSMCs were subcultured when reaching 80–95% confluence or before over-confluence symptoms occurred in the densest areas.
Morphology All cell types were visually inspected during early development and identified by their typical phenotype under phase contrast microscope. ECFCs and ECs have a cobblestone shape; VSMCs exhibit a spindle shape and a “hill-and-valley” pattern. In addition, ECFCs appear as colonies. Western blot (WB) Cell pellets for WB were obtained by trypsinization of three 75-cm 2 flasks at early passages (P2-P3) followed by a wash with PBS and a 5-min centrifugation at 110 g to discard any medium traces and stored at -80 °C until protein extraction. Pellets were resuspended in 450 µl lysis buffer {50 mM HEPES, 1 mM EDTA, 1 mM EGTA, 10% glycerol, 1 mM DTT, 5 μg/ml pepstatin, 3 μg/ml aprotinin, 10 μg/ml leupeptin, 0.1 mM 4-(2-aminoethyl)benzenesulfonyl fluoride hydrochloride (AEBSF), 1 mM sodium vanadate, 50 mM sodium fluoride, and 20 mM 3-[(3cholamidopropyl)dimethylammonio]-1-propanesulfonate (CHAPS)} and went through 3 freeze/thaw cycles in liquid nitrogen and 37 °C water bath. Lysates were centrifuged at 3000 g for 10 min at 4 °C; supernatant protein concentration was quantified using a BCA protein assay kit (Pierce, catalog number 23227) according to the manufacturer’s instructions. WB was performed as previously described using 60 µg of proteins per lane. The primary antibodies were directed against ERG (1:1000, Abcam, ab92513), endothelial nitric oxide synthase (eNOS) (1:200, Becton Dickinson, 610296), calponin 1 (Calp1) (1:5000, Abcam, ab46794), alpha-smooth muscle actin (SMA) (1:250, Sigma, A2547) and von Willebrand factor (VWF) (1:500, Cloud-Clone, PAA833Hu01). The secondary antibodies were IRDye 800 Donkey anti-mouse and IRDye 680 Donkey anti-rabbit (1:10,000, LI-COR Biosciences, 926-32212 and 926-68073). Visualization was done using an Odyssey Infrared Imaging System (LI-COR) and brightness was adjusted using ImageJ software. As neither ECs nor VSMCs have a fully dedicated marker, preliminary experiments were conducted to determine, based on the literature, which proteins would be most suitable as biomarkers for differentiating ECs from SMCs in this project (data not shown). Using WB, we thus selected proteins found in both native HUV and HUAs from male and female newborns, but detected only in endothelial-like cells (ECs and ECFCs) or SMCs, in order to allow exclusion of cross-contamination between ECs and SMCs in cell cultures derived from umbilical vessels. Immunocytofluorescence (ICF) 12,000 cells (P3-P4) per well were plated on a polymer 8-Chambers slide (μ-slide 8-well IbiTreat, Ibidi, 80826-IBI) coated with gelatin and were allowed to adhere overnight at 37 °C in a humidified and controlled atmosphere containing 5% CO 2 . Cells were then fixed according to the targeted protein as follows: cold methanol 1 min on ice for SMA and VWF; cold acetone 10 min on ice for eNOS and Calp1; paraformaldehyde 4% 10 min RT° for ERG. Additional wells with the same fixation conditions were used for unstained controls (without primary antibodies). After fixation, slides were dried out and kept frozen at − 20 °C until ECs and VSMCs were ready to be processed simultaneously. Following a 5-min permeabilization with Triton 0.25% at RT°, cells were blocked 20 min at RT° with goat serum 4%, incubated overnight at 4 °C with the same primary antibodies as for WB diluted in blocking solution (1:100), washed, and incubated with goat anti-rabbit Cy2 (1:1000, Abcam, ab6940) and anti-mouse Alexa 568 (1:100, Life Technologies A-11004) for 1 h at RT°. 
Finally, cells were washed and covered using a non-hardening mounting medium containing DAPI. Images were taken with a Zeiss inverted microscope using a 20 × objective and optimized for visualization using ImageJ. Polychromatic flow cytometry (PFC) ECFC profile was confirmed by PFC using conjugated antibodies against CD31 (PE, BioLegend, 303106), CD146 (APC, BioLegend, 361016) and CD45 (BV 421, BioLegend, 368522) according to and manufacturer’s instructions. Briefly, 300,000 cells (P2-P5) were split into 3 tubes: one with all three antibodies (total staining), one unstained control without antibodies, and one for viability test (Zombie NIR™ Fixable Viability Kit, BioLegend, 423105). Cells were incubated 20 min with the viability dye 1:500 at RT°, or with primary antibodies 1:40 on ice. After one wash for the viability test or two washes for the stained and unstained tubes, flow cytometry was performed on a Cytoflex S (V4-B2-Y4_r3, C09766) using CytExpert software (v2.3.1.22). Approximately 20,000 events were recorded in the FSC/SSC gated population during each read. The lasers (L) and filters (F) specifications of the Cytoflex S were as follows: PB450 (CD45) L:405 F:450/45, PE (CD31) L:561 F:585/42, APC (CD146) L:638 F:660/10, APC-A750 (Zombie NIR™) L:638 F:780/60. Analysis was performed using the free web-based interface available at https://floreada.io/ . First, cell population was gated using forward/side scatter (FSC/SSC) to exclude cellular debris; cell death was subsequently controlled to be less than 1% of the gated population with viability dye; finally, marker expression was measured on single parameter histogram and two-parameters density plot, with positive threshold based on unstained controls. No spectral overlap (Fig. S3) nor non-specific signal were detected, except for the viability dye which was run separately, so no compensation mechanisms were necessary. PFC experiment has been designed and optimized as detailed in Supplementary Information (Online Resource 1).
All cell types were visually inspected during early development and identified by their typical phenotype under phase contrast microscope. ECFCs and ECs have a cobblestone shape; VSMCs exhibit a spindle shape and a “hill-and-valley” pattern. In addition, ECFCs appear as colonies.
Cell pellets for WB were obtained by trypsinization of three 75-cm 2 flasks at early passages (P2-P3) followed by a wash with PBS and a 5-min centrifugation at 110 g to discard any medium traces and stored at -80 °C until protein extraction. Pellets were resuspended in 450 µl lysis buffer {50 mM HEPES, 1 mM EDTA, 1 mM EGTA, 10% glycerol, 1 mM DTT, 5 μg/ml pepstatin, 3 μg/ml aprotinin, 10 μg/ml leupeptin, 0.1 mM 4-(2-aminoethyl)benzenesulfonyl fluoride hydrochloride (AEBSF), 1 mM sodium vanadate, 50 mM sodium fluoride, and 20 mM 3-[(3cholamidopropyl)dimethylammonio]-1-propanesulfonate (CHAPS)} and went through 3 freeze/thaw cycles in liquid nitrogen and 37 °C water bath. Lysates were centrifuged at 3000 g for 10 min at 4 °C; supernatant protein concentration was quantified using a BCA protein assay kit (Pierce, catalog number 23227) according to the manufacturer’s instructions. WB was performed as previously described using 60 µg of proteins per lane. The primary antibodies were directed against ERG (1:1000, Abcam, ab92513), endothelial nitric oxide synthase (eNOS) (1:200, Becton Dickinson, 610296), calponin 1 (Calp1) (1:5000, Abcam, ab46794), alpha-smooth muscle actin (SMA) (1:250, Sigma, A2547) and von Willebrand factor (VWF) (1:500, Cloud-Clone, PAA833Hu01). The secondary antibodies were IRDye 800 Donkey anti-mouse and IRDye 680 Donkey anti-rabbit (1:10,000, LI-COR Biosciences, 926-32212 and 926-68073). Visualization was done using an Odyssey Infrared Imaging System (LI-COR) and brightness was adjusted using ImageJ software. As neither ECs nor VSMCs have a fully dedicated marker, preliminary experiments were conducted to determine, based on the literature, which proteins would be most suitable as biomarkers for differentiating ECs from SMCs in this project (data not shown). Using WB, we thus selected proteins found in both native HUV and HUAs from male and female newborns, but detected only in endothelial-like cells (ECs and ECFCs) or SMCs, in order to allow exclusion of cross-contamination between ECs and SMCs in cell cultures derived from umbilical vessels.
12,000 cells (P3-P4) per well were plated on a polymer 8-Chambers slide (μ-slide 8-well IbiTreat, Ibidi, 80826-IBI) coated with gelatin and were allowed to adhere overnight at 37 °C in a humidified and controlled atmosphere containing 5% CO 2 . Cells were then fixed according to the targeted protein as follows: cold methanol 1 min on ice for SMA and VWF; cold acetone 10 min on ice for eNOS and Calp1; paraformaldehyde 4% 10 min RT° for ERG. Additional wells with the same fixation conditions were used for unstained controls (without primary antibodies). After fixation, slides were dried out and kept frozen at − 20 °C until ECs and VSMCs were ready to be processed simultaneously. Following a 5-min permeabilization with Triton 0.25% at RT°, cells were blocked 20 min at RT° with goat serum 4%, incubated overnight at 4 °C with the same primary antibodies as for WB diluted in blocking solution (1:100), washed, and incubated with goat anti-rabbit Cy2 (1:1000, Abcam, ab6940) and anti-mouse Alexa 568 (1:100, Life Technologies A-11004) for 1 h at RT°. Finally, cells were washed and covered using a non-hardening mounting medium containing DAPI. Images were taken with a Zeiss inverted microscope using a 20 × objective and optimized for visualization using ImageJ.
ECFC profile was confirmed by PFC using conjugated antibodies against CD31 (PE, BioLegend, 303106), CD146 (APC, BioLegend, 361016) and CD45 (BV 421, BioLegend, 368522) according to the manufacturer’s instructions. Briefly, 300,000 cells (P2-P5) were split into three tubes: one with all three antibodies (total staining), one unstained control without antibodies, and one for the viability test (Zombie NIR™ Fixable Viability Kit, BioLegend, 423105). Cells were incubated for 20 min with the viability dye (1:500) at RT, or with primary antibodies (1:40) on ice. After one wash for the viability test or two washes for the stained and unstained tubes, flow cytometry was performed on a Cytoflex S (V4-B2-Y4_r3, C09766) using CytExpert software (v2.3.1.22). Approximately 20,000 events were recorded in the FSC/SSC gated population during each read. The laser (L) and filter (F) specifications of the Cytoflex S were as follows: PB450 (CD45) L:405 F:450/45, PE (CD31) L:561 F:585/42, APC (CD146) L:638 F:660/10, APC-A750 (Zombie NIR™) L:638 F:780/60. Analysis was performed using the free web-based interface available at https://floreada.io/ . First, the cell population was gated using forward/side scatter (FSC/SSC) to exclude cellular debris; cell death was subsequently confirmed to be less than 1% of the gated population with the viability dye; finally, marker expression was measured on single-parameter histograms and two-parameter density plots, with the positive threshold based on unstained controls. No spectral overlap (Fig. S3) nor non-specific signal was detected, except for the viability dye, which was run separately, so no compensation was necessary. The PFC experiment was designed and optimized as detailed in the Supplementary Information (Online Resource 1).
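For readers who analyze exported events outside a dedicated flow-cytometry package, the unstained-control thresholding step above can be summarized in a few lines. This is a minimal sketch only: the CSV file names, channel column names, and the 99.9th-percentile cutoff are illustrative assumptions, not parameters taken from the study.

```python
# Minimal sketch of positivity gating against unstained controls.
# File names, column names, and the percentile cutoff are assumptions.
import pandas as pd

stained = pd.read_csv("ecfc_stained.csv")      # hypothetical per-event export
unstained = pd.read_csv("ecfc_unstained.csv")  # hypothetical unstained control

for marker, channel in [("CD31", "PE"), ("CD146", "APC"), ("CD45", "PB450")]:
    cutoff = unstained[channel].quantile(0.999)         # threshold from control
    pct_pos = (stained[channel] > cutoff).mean() * 100  # percent positive events
    print(f"{marker}: {pct_pos:.1f}% positive (cutoff = {cutoff:.0f})")
```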
The profile of the biological samples used in this report and the cell culture outcome for each sample are presented in Table . The success rate for each vascular cell type and the time required between cell isolation and first passage are summarized in Table .
The first colonies appeared after 10–30 days with a tightly packed cobblestone phenotype (Fig. a). We noticed that cell size varied with density and proliferation rate, but this did not affect characterization outcomes. ECFCs were subcultured approximately 26 days after isolation (Table ). ECFC culture had a success rate of 9/13 (69%) (Table ). All successful ECFC cultures met the characterization criteria using PFC and WB.
HUAECs (Fig. b) and HUVECs (Fig. c) were the easiest cell types to isolate and cultivate. Both exhibited a classic cobblestone phenotype and high proliferation rates. No morphological distinction was observed between HUAECs and HUVECs. As for ECFCs, we sometimes observed increased cell size when confluency and proliferation rates were lower than expected, but this did not affect characterization outcomes. HUAECs often provided better yields than HUVECs and were the first to be subcultured, after approximately 11 days, compared to 15 days for HUVECs (Table ). No significant difference was found in the number of days between isolation and the first subculture (P1) for HUAECs and HUVECs (p = 0.0801, Wilcoxon matched-pairs signed rank test). One of the 14 HUAEC cultures was contaminated by VSMCs, which was easily detected by brightfield microscopy owing to the distinct elongated shape of VSMCs compared to ECs, and was later confirmed by WB; another HUAEC culture failed to meet the characterization criteria. One of the 14 HUVEC cultures did not show ERG expression by WB. This resulted in success rates of 12/14 (86%) for HUAECs and 13/14 (93%) for HUVECs (Table ).
HUASMCs and HUVSMCs required the most rigorous attention, mainly during the explant phase, where it was crucial to exclude any wells that might contain ECs. HUASMCs began migrating from explants after about 1 week (Fig. d) and further grew as spindle-shaped cells with a “hill-and-valley” pattern (Fig. e). HUVSMC migration was often more spread around and beneath the explant tissue. HUVSMCs appeared less elongated but were able to form nodules (Fig. f) and a “hill-and-valley” pattern. For each umbilical cord, HUV and HUA explants were distributed in 4–6 wells per vessel type to isolate VSMCs. Only visually satisfactory wells were further processed and subcultured. The proportion of wells selected for subculture was significantly lower for HUVSMCs (43/80, 54%) than HUASMCs (74/80, 93%) (p = 0.0029, Wilcoxon matched-pairs signed rank test) (Table ). The number of days between explant culture and first passage was significantly greater for HUVSMCs (approximately 17 days) than HUASMCs (approximately 14 days) (Table ). Globally, 1/14 HUVSMC cultures failed to start, while 1/14 HUASMC and 4/14 HUVSMC cultures were found, after characterization by WB, to be contaminated by ECs despite careful visual inspection and selection of the most promising wells. Therefore, the resulting success rates were 13/14 (93%) for HUASMCs and 9/14 (64%) for HUVSMCs (Table ). No bacterial or fungal contamination was observed in any culture after Primocin® was introduced in our protocols.
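The paired comparison reported above (per-cord proportions of wells kept for HUASMCs vs. HUVSMCs) corresponds to a standard Wilcoxon matched-pairs signed rank test. The sketch below illustrates the call; the per-cord proportions are made-up placeholders, since the raw well counts per cord were not published line by line.

```python
# Illustrative Wilcoxon matched-pairs signed rank test; the per-cord
# proportions below are placeholders, not the study's raw data.
from scipy.stats import wilcoxon

# proportion of explant wells selected for subculture, one value per cord
huasmc = [0.9, 1.0, 0.8, 1.0, 0.9, 1.0, 0.9, 1.0, 0.8, 1.0, 0.9, 1.0, 0.9, 1.0]
huvsmc = [0.5, 0.6, 0.4, 0.7, 0.5, 0.6, 0.5, 0.4, 0.6, 0.5, 0.7, 0.4, 0.6, 0.5]

stat, p = wilcoxon(huasmc, huvsmc)  # paired, non-parametric comparison
print(f"W = {stat}, p = {p:.4f}")
```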
Cell culture characterization was confirmed by WB and ICF (Fig. , S1-S2). Successful EC and ECFC cultures were defined by the presence of eNOS, VWF and ERG, and the absence of SMA and Calp1; successful VSMC cultures showed the opposite profile (Fig. ). A slight band below the Calp1 staining was attributed to a non-specific signal from the anti-ERG antibody previously used on the same membrane (Fig. f, S1). ECFCs isolated from 5 umbilical cord blood samples were characterized by PFC. Immunophenotyping revealed that over 99.9% of the cells were CD31+/CD146+, and 100% were CD45− (Fig. , Table ). Detailed validation controls are presented in the Supplementary Information (Online Resource 1).
This study aimed to develop a simple, rapid and cost-effective method for simultaneous isolation and culture of ECFCs, HUVECs, HUAECs, HUVSMCs and HUASMCs from the umbilical cord of a single patient. First, cord blood is processed to harvest mononuclear cells using a Ficoll gradient, which are then plated and cultured until ECFCs are obtained. Then, the HUV and HUAs are carefully dissected, longitudinally opened to expose the lumen, and incubated with collagenase/dispase to recover ECs. Finally, the vessels used to isolate ECs are washed and cut into small pieces to obtain VSMCs by migration from vascular explants. All cell types are then characterized by visual inspection, WB and ICF. ECFC characterization is also confirmed by PFC analysis. Numerous publications describe how to isolate ECs or VSMCs from human umbilical cords. On the basis of several previous reports, notably that by Martin de Llano , we addressed various issues and optimized the procedure to reduce costs, contamination risks, and technical complexity, making it easy to apply in laboratories without any specialized equipment or knowledge. The main step to simplify the procedure is to longitudinally open the vessels before enzymatic digestion to recover ECs, and to use explants for VSMCs. There is no need for a catheter, which reduces costs and handling, improves sterility, and avoids issues such as blood clots or leakage of the enzymatic solution. It is worth noting that Primocin® solved one of the major problems we faced during protocol optimization by completely preventing any contamination. Moreover, its anti-mycoplasma effect could be a valuable help given the common presence of genital mycoplasmas , their in vitro persistence, and their potential impact on various experiments. ECs were the easiest cells to isolate and cultivate, achieving a success rate of about 90%. Contamination by VSMCs is unlikely to persist, as these cells seem to require different supplements for proper growth and will either undergo apoptosis or be outgrown by ECs. ECFC culture had a success rate of about 70%, consistent with previous reports indicating a failure to isolate ECFCs from about 25–30% of healthy donors . VSMCs were the cells requiring the most attention, with a higher success rate for HUASMCs (93%) compared to HUVSMCs (64%). This could be linked to the thinner muscular wall of the HUV and the weaker explant attachment to the well bottoms compared to HUA. Moreover, differentiating HUVSMCs from ECs can be challenging due to their similar appearance, particularly before they form a distinct “hill-and-valley” pattern. It has already been described that subpopulations of VSMCs can even have a cobblestone appearance . However, as the proportion of visually satisfactory wells selected for subculture was lower for HUVSMCs (54%) than HUASMCs (93%), increasing the initial number of wells containing HUV explants could improve the success rate for HUVSMCs. Based on our observations, twice as many wells should be prepared with explants of HUV as of HUA. Given the risk of cross-contamination, distributing vascular explants in several small wells (instead of putting them into a larger culture dish) helps to limit the risk of having an entire SMC culture contaminated with ECs, as wells that visually appear suspicious can be eliminated, so that only the most promising wells are processed and subcultured. Cell culture characterization by WB, ICF and PFC yielded concordant outcomes, confirming the reliability of this protocol.
Due to the various phenotypes and subsequent protein expression changes reported in the literature [ – , – ], neither ECs nor VSMCs have a fully dedicated marker. It is therefore recommended to combine multiple markers , selecting proteins unique to ECs or VSMCs to exclude cross-contamination. Consequently, we selected several proteins that we found in both native umbilical vessels but detected only in ECs (eNOS, VWF and ERG) or SMCs (SMA and Calp1), to allow detection of cross-contamination. Other studies have classically used SMA, smooth muscle-myosin heavy chain, Calp1, SM22-alpha or h-caldesmon for SMCs and CD31 or VWF for ECs [ , , , – ]. It would be interesting to investigate the expression of some molecules used to distinguish between cells isolated from arteries and veins, although this step is not necessary in the proposed procedure since we know from which vessel the cells were derived. There is no fully dedicated marker for either vessel. Among those classically used, the Eph B4 receptor and its ligand ephrin-B2 have been described as preferentially expressed in veins and arteries, respectively [ – ]. However, preliminary experiments performed to determine whether they allow discrimination between cells isolated from the HUV and HUAs in our project revealed sex differences in native vessels: ephrin-B2 was significantly increased in HUA compared to HUV in males, but not in females (Online Resource 2). These observations highlight the need to consider biological sex when establishing biomarkers. Moreover, differential expression of these arteriovenous biomarkers was found to vary between adult and umbilical vessels . As there is increasing evidence that the Eph/ephrin family plays key roles in cardiovascular development and disorders , the relative expression of these proteins in umbilical vessels from males and females will be investigated in both appropriate for gestational age (AGA) and growth-restricted (intrauterine growth restriction, IUGR) newborns. Regarding the time required between cell isolation and first passage, it was similar for HUAECs and HUVECs, but longer for HUVSMCs than HUASMCs. Other comparisons are not relevant, as the quantity of cells initially cultured was not standardized. Cell proliferation will be studied later to further characterize umbilical vascular cells isolated from AGA and IUGR male and female newborns. Indeed, the development of this methodology for isolating vascular cells from umbilical cords is a steppingstone in our ongoing study of altered regulation of the human umbilical circulation in IUGR and the influence of fetal sex. In particular, we will compare, for each cell type, functional and molecular properties (including genetic profile) in cells isolated from AGA and IUGR male and female newborns. We will also compare, within each study group, HUVSMCs with HUASMCs, HUAECs with HUVECs, as well as ECFCs with differentiated HUAECs and HUVECs. In conclusion, this protocol provides a reliable and effective method for simultaneous isolation and culture of ECFCs, ECs and VSMCs from cord blood, HUV and HUAs collected from the same patient. This will enable optimal use of each biological sample and direct comparison between different cell types. This approach could facilitate the creation of a biobank containing cryopreserved ECFCs, HUVECs, HUAECs, HUVSMCs and HUASMCs, as well as native HUV and HUAs, linked to clinical data, offering further research opportunities.
Isolation and culture of vascular cells from umbilical cords of healthy or sick neonates would lead to a better understanding of the human umbilical circulation under physiological or pathological conditions. In addition, considering the sex of the newborns from whom these cells originate will make it possible to assess the influence of biological sex, which should be considered a key factor in cardiovascular research and clinical management. This will contribute to the development of targeted therapeutic strategies in the future.
Electronic supplementary material: Supplementary file 1 (PDF 5940 KB).
Gender representation in leadership & research: a 13-year review of the Annual Canadian Society of Otolaryngology Meetings | 87981e28-679f-442a-8265-5c9c1e05ae07 | 10173511 | Otolaryngology[mh] | Gender disparity in surgical disciplines, including Otolaryngology-Head and Neck Surgery (OHNS), has been highlighted in recent literature. Over the past 20 years, the proportion of female staff otolaryngologists and trainees has increased by 14.2% and 13.3% respectively, where 24.2% of staff otolaryngologists were female, and 41.9% of residents were female as of 2019 . Despite these advances, women lack proportionate representation in leadership positions in OHNS academic departments and specialty societies, though this may be improving among junior academic positions [ – ]. Termed “manels”, male-only speaking panels at major scientific conferences have been a recent focus in the literature. Women speakers were underrepresented across multiple medical and surgical specialty conferences, including in cross-sectional analyses of various American and Canadian society meetings [ – ]. In 2019, Nature Conferences and Springer Nature released a new code of conduct to formalize efforts to increase gender diversity, including no male-only organizing committees, no male-only panels, annual monitoring of progress, and sanctions when the code is not followed . Dr. Francis Collins, the National Institute of Health director, stated that women and other minorities were not equitably represented at major scientific conferences. He vowed to help “end the Manel tradition” by refusing to speak at a conference if attention to diversity was not given . Diversity in society meetings and panel-type presentations has multiple benefits. It has the potential to expand perspectives and several studies have shown that varied opinions may lead to better ideas, innovation, and an overall stronger panel . Women physicians have been shown to provide stellar patient care with excellent outcomes and have a place on these panels [ – ]. Increasing equitable representation of women and others helps perpetuate to attendees that individuals of all backgrounds are important members of the specialty society. Presentation at academic meetings and participation on scientific panels is also important for career advancement in academia. The presence of female representation helps decrease the “glass ceiling” effect noted for women in academia . Finally, this is an issue of justice and inclusivity . While there have been studies on gender diversity amongst speakers at key surgical conferences in the United States and Europe, there has not been published literature assessing this in our specialty in Canada. The Canadian Society of Otolaryngology-Head and Neck Surgery (CSO) is the major Otolaryngology society in Canada and encompasses all Otolaryngology subspecialties. Our aim was to determine the state of gender diversity amongst presenters and speakers at the annual CSO meetings.
Scientific programs for the CSO Annual Meetings were obtained from the society’s website (www.entcanada.org) from 2008 to 2020 by two independent groups of researchers at two Canadian institutions. Extracted information for each position included: participant name, gender, role, and subspecialty topic (General OHNS, Education, Laryngology, Pediatric, Otology, Head and Neck Surgery, Facial Plastics and Reconstructive Surgery (FPRS), Endocrinology, and Rhinology). CSO annual newsletters were also accessed to extract the name and gender of CSO executive leadership. A binary definition of gender (male or female) was chosen as a surrogate of diversity in the study population, composed of specialists trained in Otolaryngology, Otolaryngology trainees, other medical specialists, allied health members, and medical students. Gender was determined using an online search of Google Scholar, departmental websites, and public descriptions. If gender could not be determined from online information, co-authors and fellow panelists were contacted to determine this information.
CSO Executive Membership was extracted from CSO annual newsletters and included members of the executive council, executive committee, regional representatives, and special interest group leaders. Each of these was defined as a leadership “opportunity spot” and the number of unique women occupying these roles was quantified. To quantify the degree of diversity, each position was counted as one “opportunity spot” as per Barinsky et al. to capture those who participate in several different roles.
An invited speaking opportunity spot was defined as any named role in the CSO program other than paper session or poster presenter (i.e. session moderator). Of the opportunity spots occupied by a woman, the absolute number of women included was also assessed. The following roles were included: CSO president, scientific program chair, local arrangements chair (if provided), guest(s) of honour, guest speakers and special presenters, award winners, workshop presenters and panelists, and paper session chair.
The composition of panels was separately analyzed and divided into male-only panels, female-only panels, or those with at least one female participant. The CSO meetings labelled sessions led by one or a small group of experts as “mini-workshops”, “workshops”, “courses” or “panels”. Among workshops with multiple presenters, those with two or fewer presenters were classed as “workshop chairs”, and those with three or more presenters were classed as “panelists”; presenters designated as “workshop chairs” who also appeared with a separate panel were counted as workshop chairs. All named non-otolaryngologists (including other medical specialists, allied health specialists, and researchers) and non-Canadian otolaryngologists were included in the count. Descriptive statistical analysis was performed using SAS Software (Version 9.4, SAS Institute Inc., Cary, NC, USA) and consisted of counts and percentages. The two data sets produced by the two independent groups were merged into a single file for cross-checking. A senior author (EG) reviewed the file and flagged any inconsistencies in the data, which were then investigated and corrected based on publicly available information. The senior author identified 48 errors (approximately 4%), which were corrected. Gender differences were analyzed using chi-square tests and logistic regression with odds ratios (OR) and 95% confidence intervals (95%CI). An alpha level of 0.05 was used to determine statistical significance. This project did not require ethics oversight as per article 2.2 of the Tri-Council Policy Statement (TCPS)-2 guidelines regarding the use of publicly available data for research purposes.
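As a concrete illustration of the tests named above, the sketch below runs a chi-square test on a 2×2 gender-by-role table and fits a logistic regression for the odds of a spot being held by a woman as a function of meeting year, reporting the odds ratio with its 95% CI. All counts are illustrative placeholders; the study used SAS, whereas this sketch uses Python's scipy and statsmodels.

```python
# Hedged sketch of the chi-square and logistic-regression analyses;
# every count below is a placeholder, not data from the study.
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2_contingency

# 2x2 table: rows = men/women, columns = leadership/non-leadership spots
table = np.array([[450, 500],
                  [90, 210]])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")

# logistic regression: odds that a spot is held by a woman vs. year
years = np.arange(2008, 2021)
women = np.array([6, 8, 10, 14, 18, 20, 28, 30, 33, 35, 38, 40, 50])
total = np.array([90, 95, 100, 120, 130, 140, 150, 155, 160, 165, 170, 180, 211])
X = sm.add_constant(years - 2008)  # intercept + years since 2008
fit = sm.GLM(np.column_stack([women, total - women]), X,
             family=sm.families.Binomial()).fit()
odds_ratio = np.exp(fit.params[1])           # OR per additional year
ci_low, ci_high = np.exp(fit.conf_int()[1])  # 95% CI on the OR
print(f"OR per year = {odds_ratio:.3f} (95% CI {ci_low:.3f}-{ci_high:.3f})")
```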
A total of 1874 opportunity spots were available during the annual CSO meetings from 2008 to 2020, of which 348 (18.6%) were filled by women (Table ). These were held by 92 unique women in total. There was an overall increase in the number and proportion of these positions held by women (Fig. ), from six leadership spots in 2008 (6.7%) to a peak of 50 spots in 2020 (23.7%).
Among all CSO executive members, there were 448 men (83.0%) and 92 women (17.0%) over the studied period, encompassing 342 unique men and 63 unique women. The gender breakdown by position type, along with the number of unique individuals occupying these positions, is shown in Fig. . There was a significant difference between male and female representation among society executive members (p = 0.0009). Figure shows the change in gender representation in executive positions over the period studied, with the trend in unique individuals occupying these positions. Notably, there has been one female CSO president and no female scientific program chairs during the period studied. Among the Guests of Honour, of which there are usually one or two per meeting, only one female otolaryngologist was chosen across all meetings. The CSO Awards Committee Chair, who also serves as chair of the annual Poliquin competition for resident research, had been a male surgeon until 2019 and 2020, when a female surgeon was elected to this role. From 2011 and 2014 onwards, various awards were given for lifetime achievement, recognition by Canadian region, and fellowship awards. Of the thirty-one awards, seven (22.6%) were awarded to women across all years studied.
Overall, there were 1,136 invited speaking opportunities at CSO meetings between 2008 and 2020. Of these, 97 were part of workshops and 1,039 were part of panels. Females represented only 18.6% (18) of invited speakers at workshops and 18.6% (193) at panels. Across the CSO years, female representation in panels steadily increased until 2015 and has since remained constant at around 20 to 25% (Fig. ). There was no discernible trend in female representation in workshops over the same period. The Scientific Program Committee consists of the CSO president, the Scientific Program Chair, and the Continuing Professional Development (CPD) Committee Chair. In the period studied, this committee included one woman (of 3–4 members) in 2008, 2013, and from 2015–2020. The larger Scientific Program Reviewer Committee consisted of 20–25 members representing all OHNS subspecialties, who reviewed blinded abstracts to select workshop/panel presenters, oral session presenters, and poster presenters. Data were available for 2018–2020 only; there were seven female members in 2018, seven in 2019, and five in 2020.
A total of 368 workshops (including workshops, mini-workshops, panels, courses, and CPD Corner sessions) were identified. There were 225 (61.1%) male-only panels (“manels”), while 9 (2.5%) were led by women only, and 134 (36.4%) workshops included at least one female surgeon (Fig. ). Chi-square analysis showed a significant difference between the proportion of male-only panels and those including any women (p = 0.0001). The CSO meeting in 2015 was the first year that there was a greater proportion of panels including at least one woman than those with exclusively male panelists (55.8% mixed panels), and this trend has continued for four out of six subsequent years. Female leadership was significantly underrepresented in many subspecialties (Table ). Conversely, laryngology and general OHNS workshops consistently had more female representation, even from 2008. There were only five instances, between 2008 and 2020, where females made up the majority of representatives in their discipline’s sessions compared to their male counterparts.
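The panel-composition comparison above reads as a goodness-of-fit test of the reported counts against an even split; a minimal sketch of that reading, using the counts given in the text (225 male-only workshops versus 134 + 9 that included at least one woman), is shown below. Whether the authors tested exactly this contrast is an assumption on our part.

```python
# Chi-square goodness-of-fit on the reported workshop counts; the
# even-split null hypothesis is our assumption, not the paper's stated model.
from scipy.stats import chisquare

male_only = 225
any_woman = 134 + 9  # mixed panels plus women-only panels
stat, p = chisquare([male_only, any_woman])  # default: uniform expected counts
print(f"chi2 = {stat:.2f}, p = {p:.6f}")
```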
Our data demonstrated that female surgeons held nearly a quarter of the total speaking positions at the CSO meetings from 2008 to 2020. The most common roles held were paper session chairs and panelists (a workshop led by three or more specialists). The proportion of male-only panels and workshops (“manels”) did decrease over time, but constituted over half of all workshops in 2020. Our results align with the current literature highlighting the differential representation of women in academic conferences, particularly in medicine and in surgical subspecialties [ – , , ]. Barinsky et al. were the first and only group to publish on the gender disparity of OHNS conference speakers in the US. They showed an increase in opportunity spots occupied by women from 11.5% in 2003 to 29.5% in 2019, but that the number of unique women occupying these spots was only 24.4% of the total . Women were more likely to be oral session moderators or panelists instead of speakers, executive board members, or honoured guests. This was mirrored in our results in the Canadian population. It is promising that we have seen a trend toward increasing female representation over the past 12 years, especially amongst workshop chairs and panelists. Part of this may be attributed to an increase in the number of female otolaryngologists in Canada, from only 10% in 2000 to 24% in 2019 , which does approximate the proportion of female speakers in those years. An increase in female representation in leadership positions was seen starting in 2014. We hypothesize that there may be several contributing factors—the opening of more opportunities to present workshops, a critical mass of female staff and trainees moving through the pipeline, and the development of a formal Women in Otolaryngology section of the CSO. “Mini-workshops” and “How I Do It” workshops were first introduced at CSO meetings in 2014, though they were not always present in subsequent years. The increase in opportunities, particularly of smaller workshops, may be a way of increasing opportunities for participation from more junior staff, a pool of specialists more likely to include women . Our results also showed increased female representation in broader subspecialties starting in 2014. The proportion of Canadian and American women pursuing academic fellowships in surgical specialties has increased over the past several decades . From 2011 to 2020, the number of Canadian female otolaryngologists who have completed subspecialty fellowships has increased. Still, the gender gap was largest in head and neck surgery, rhinology, and otology, where only 28%, 29%, and 22% were female, respectively. Pediatric OHNS and laryngology were the only two fellowships with a female predominance. However, the absolute number of female graduates of otology, rhinology, and facial plastic surgery ranged from 5–10 over 2011 to 2020, whereas the numbers of female graduates of head and neck surgery and pediatric OHNS were similar at 15–20, and more than 30 new general OHNS practitioners were female . This correlates with our findings that there was less female representation in facial plastics and rhinology workshops. Increasing mentorship opportunities and visibility of women and minorities can lead to increased participation in academic activities by junior staff and trainees [ , , ]. The CSO Women in Otolaryngology (WIO) group was established in 2014, and coincides with the increased female presence at the annual meeting. 
The WIO hosts networking sessions with female staff and trainees from across the country, offers opportunities for society leadership and mentorship, and creates a sense of community. This may be critical for incoming and junior trainees navigating transitions and seeking career advancement opportunities. Deliberate initiatives such as this will continue to raise awareness of gender disparities in our specialty and encourage females to pursue academic aspirations, an essential first step toward increasing representation. In 2019, 41.9% of OHNS trainees were female, and 45.3% of OHNS CaRMS applicants were female, indicating that future generations may see greater gender parity. We expect to see a similar trend in our speakers and conference leadership as more women become involved in academic endeavours. The literature shows that despite increasing proportions of female trainees and surgeons, women are still underrepresented in OHNS leadership and senior academic roles (such as assistant, associate, and full professor) compared to men [ , – ], and when compared to all specialties in medicine . However, a lag effect may be contributing to this phenomenon, in that it will take several years for the newly admitted trainees to eventually progress through their careers to leadership positions. To close the gender disparity amongst conference speakers and presenters, there must be continued efforts to close the gender gap among trainees entering the specialty, increase support for women to pursue research and academia , and develop initiatives to recruit and retain female faculty . Studies from Arora et al., Lu et al., Gerull et al., and Zaza et al. assessed the proportion of female speakers at an aggregate of over a hundred academic medical and surgical conferences across multiple specialties and examined the correlation between the proportion of women on conference planning committees and female speakers [ , , , ]. There was a statistically significant positive correlation between the proportion of women on planning committees and in society leadership and the proportion of female speakers, based on univariable analysis, which remained significant after controlling for the regional gender balance of the specialty. For our study, the scientific planning committee information was only fully available from 2018 onwards, and while it would have been interesting to support this literature with our study, this analysis was not possible in a meaningful way. Increasing the proportion of women on conference planning committees may be a simple yet effective way to reduce the gender disparity amongst speakers [ , , , , ]. Our conference has a blinded selection process, with workshop chairs and presenters submitting blinded abstracts to be selected by the scientific planning committee. The gender disparity in workshops may therefore not be related to gendered selection bias, but rather to the number of women conducting research and their research productivity. A 2013 study reported that women in their early career produce less research output but at senior levels equal or exceed the research productivity of men , whereas a more recent report from 2020 indicates that female otolaryngologists are maintaining research productivity in their early careers (less than 15 years into practice) to keep closer pace with men. However, women continued to lag behind men in research productivity in some subspecialties such as head and neck oncology, laryngology, and pediatrics .
There are likely numerous contributing factors affecting research productivity, but the evolution of societal gender roles with more equal sharing of domestic duties and child care, greater financial and administrative support for research, and increasing mentorship opportunities will have a positive impact [ , , , ]. This study is only one component in achieving greater equity and diversity: raising awareness of disparities. Moving forward, we must consider systems-level change to improve gender parity . It is critical to further assess the factors impacting speaker invitations for conferences, and women’s submissions for these opportunities. These may include personal and professional barriers, the proportion of women in the specialty, research productivity, visibility as a leader in the field, gender bias, and the gender composition of the conference planning committee . Regular reassessment of female representation at these conferences is a crucial checkpoint . Ongoing analysis of equity at national society and departmental levels may be facilitated by designated diversity and inclusion leads or committees, and by including these stakeholders in conference and departmental planning . With the higher proportion of women amongst younger otolaryngologists and trainees, continuing to improve the gender gap will result in a larger pool from which to select our conference leadership and presenters.
The results of the study must be interpreted within the confines of the research methodology. This study is limited in that it is a retrospective review of various publicly available databases, and thus the authors were unable to confirm the accuracy or validity of this data. Data around the proportion of abstracts submitted by female presenters versus the proportion accepted for presentation was not available. We also used a binary definition of biological sex as a surrogate for gender identity, which exists on a spectrum, and the biological sex of presenters was recorded based on public information and/or confirmation by colleagues. Lastly, the present study did not capture the many other diversity factors in the workforce.
The proportion of women in speaking roles at the annual Canadian Society of Otolaryngology-Head and Neck Surgery meetings has generally increased with time, particularly among panelists. This has led to a decrease in male-only speaking panels and workshops. However, there has been a slower growth rate of unique women in leadership speaker roles. There is still room for increasing gender diversity at the major Canadian OHNS meeting. Academic mentorship, equitable allocation of opportunities and resources, and equal encouragement of research endeavours for both men and women may help contribute to this.
mRNA and miRNA Expression Analyses of the | 93970f06-cabf-4f5b-91dc-9c2756bf861a | 7827072 | Pediatrics[mh] | miRNA molecules are involved in the post-transcriptional regulation of gene expression, and changes in their activity are associated with development of cancer by modulating oncogenic and/or tumor suppressor pathways. Moreover, miRNAs are still being studied as useful biomarkers, promising a valuable diagnostic tool useful in defining the prognosis and helpful in identifying the targeted therapy strategies . One of the most recognized miRNA families is the miR-17-92 cluster (OncomiR-1), whose particular members demonstrate oncogenic functions influencing cell proliferation, apoptosis, and neoplastic angiogenesis . OncomiR-1 contains six miRNAs: miR-17, miR-18, miR-19a, miR-20, miR-19b, and miR-92 derived from a common pri-mRNA localized in the MIR17HG / C13orf25 human gene located on chromosome 13. The cluster has two paralogs: the miR-106b-25 and the miR-106a-363, which comprise miR-106b, miR-93, and miR-25 in the MCM7 gene on chromosome 7 and miR-106a, miR-18b, miR-19b-2, miR-20b, miR-92a-2, and miR-363 on the X chromosome, respectively . The most common element of these three clusters is their origin. miR-17-92 and its paralogs were probably created through tandem genetic duplication of individual cluster members, followed by duplication of entire clusters and subsequent loss of individual miRNAs . This hypothesis was confirmed by the possibility of grouping specific miRNAs on the basis of sequence homology into four miRNA families: miR-17 family, (miR-17-5p, miR-20a, miR-20b, miR-106a, miR-106b, miR-93), miR-18 family (miR-18a, miR-18b), miR-19 family (miR-19a, miR-19b-1, miR-19b-2), and miR-92 family (miR-92a-1, miR-92a-2, miR-25, miR-363) . Experimental studies have shown that there is a negative feedback loop among members of the miR-17-92 cluster and E2F and MYC transcription factors . Many mathematical models of the MYC / E2F /miR-17-92 network were created, estimating how overexpression of the miR-17-92 cluster affects different types of cancers . The consecutive studies provide evidences that the overexpression of miR-17-92 members is involved in the development of many solid tumors, including lung , breast , colon , hepatocellular , and stomach cancer . Their essential role in adipocyte differentiation , lung development , angiogenesis , tumorigenesis , and heart development was also underlined.
2.1. MYC and E2F Gene Expression Is Connected with Tumor Type and Grade

Analysis of expression levels of genes from the MYC family showed that MYCN was characterized by the highest activity. The highest level was confirmed for the medulloblastomas (ddCt = 3.67), which have the highest grade of malignancy among the examined tumors. Differences in expression levels between the analyzed groups were statistically significant between medulloblastoma (MB) and pilocytic astrocytoma (PA) (p = 0.0125, ). The expression of the MYCC gene was at a lower level than MYCN. The highest values of MYCC were confirmed in the ependymomas (ddCt = 1.49), followed by medulloblastomas, while the lowest expression was shown in the pilocytic astrocytomas; however, no statistically significant differences were found between the groups. For the MYCL gene, we obtained the highest expression in the medulloblastomas (ddCt = 2.15), followed by pilocytic astrocytomas. Downregulation was noted in the ependymomas (ddCt = −0.68). The following differences in expression between the groups were statistically significant: p = 0.011674 (MB vs. PA), p = 0.000109 (MB vs. ependymoma (EP)) and p = 0.044045 (EP vs. PA). Gene expression analysis among the members of the E2F family showed very strong upregulation of the E2F2 gene. The highest level was found in the medulloblastomas (ddCt = 6.62), followed by EPs and PAs. Differences between the groups for the E2F2 gene (p < 0.000) were noted between the MB and PA groups and between the MB and EP groups. E2F1 expression reached significance between MBs and PAs (p < 0.000), between MBs and EPs (p = 0.0135), and between EPs and PAs (p = 0.014). The E2F3 gene showed the lowest level of expression, with ddCt = 1.33 for MBs, ddCt = 0.34 for EPs, and ddCt = 0.39 for PAs. A statistically significant difference was noted between MB and EP, where p = 0.011. In conclusion, the MYCN, E2F1 and E2F2 genes showed the highest expression levels among the studied groups. The highest ddCt values of these genes were noted in medulloblastomas, followed by ependymomas and pilocytic astrocytomas, indicating a positive correlation between expression level and tumor grade.

2.2. miRNA Expression Depends on Tumor’s Histopathology and WHO Grade

Almost all tested miRNAs from the miR-17-92, miR-106b-25, and miR-106a-363 clusters showed overexpression in the analyzed cohort of pediatric brain tumors. We observed a correlation between miRNA expression and tumor grade: the highest miRNA expression was noted for medulloblastomas, followed by ependymomas, and finally pilocytic astrocytomas. The highest expression level was confirmed for miR-18a (miR-17-92 cluster) and miR-18b (miR-106a-363 cluster). The double delta Ct of miR-18a was 3.28 for MBs, 2.04 for EPs, and 1.66 for PAs. The ddCt values noted for miR-18b were 3.54 for MBs, 2.19 for EPs, and 1.85 for PAs. It should also be emphasized that miR-17-5p and miR-20a-5p reached very low expression values in pilocytic astrocytomas: ddCt was 0.08 for miR-17-5p, while for miR-20a-5p expression was below the internal control level (−0.11). The one exception was noted for miR-363, for which downregulation in the medulloblastomas (ddCt = −0.70) was shown, while in the ependymomas (ddCt = 1.47) and pilocytic astrocytomas (ddCt = 1.48) the expression was at a higher level.
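Since all expression levels above are reported as ddCt values, a short numeric sketch may help fix the convention. In the standard comparative-Ct method, delta Ct = Ct(target) − Ct(reference) is computed per sample, and the difference of delta Ct between control and tumor gives ddCt; in this paper positive ddCt denotes overexpression, so fold change = 2^ddCt under that sign convention. The Ct values below are invented for illustration.

```python
# Comparative-Ct sketch; the sign convention (positive ddCt = overexpression)
# follows the paper's usage, and all Ct values here are invented examples.
def ddct(ct_target_tumor, ct_ref_tumor, ct_target_ctrl, ct_ref_ctrl):
    d_tumor = ct_target_tumor - ct_ref_tumor  # delta Ct in the tumor sample
    d_ctrl = ct_target_ctrl - ct_ref_ctrl     # delta Ct in the control sample
    return d_ctrl - d_tumor                   # positive = more transcript in tumor

dd = ddct(22.1, 18.0, 26.5, 18.7)  # illustrative Ct values only
print(f"ddCt = {dd:.2f}, fold change = {2 ** dd:.1f}")
```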
Comparison of miRNA expression levels from the miR-17-92, miR-106b-25, and miR-106a-363 clusters in the three tumor groups showed that the increase in miRNA expression was dependent on WHO grade and type ( and ). Statistically significant differences in miRNA expression occurred between MBs and PAs, then between EPs and PAs, while the smallest differences were noted between MBs and EPs .

2.3. Relationship between Gene Expression of Genes from MYC and E2F Families and miRNAs

The analysis of correlation between genes from the MYC and E2F families showed a positive Pearson correlation coefficient in pilocytic astrocytomas; eight out of nine gene–gene pairs achieved statistical significance, with r values ranging from 0.55 to 0.81. In the ependymoma group, six out of nine gene–gene pairs reached the level of statistical significance, with r values in the range of 0.54 to 0.66. In medulloblastomas, only three pairs obtained statistical significance, with r values from 0.38 to 0.51. There was no strong correlation between gene and miRNA expression (the correlation coefficient ranged from −0.61 to 0.38). Among the statistically significant results, the most interesting observations concerned the miR-106b-25 cluster. In ependymomas, gene expression correlated negatively with the expression of cluster members, e.g., miR-106b-MYCC r = −0.42, miR-106b-MYCN r = −0.61, miR-106b-E2F2 r = −0.51, miR-106b-E2F3 r = −0.58, miR-93-MYCN r = −0.47, miR-25-MYCN r = −0.47, miR-25-E2F2 r = −0.49, miR-25-E2F3 r = −0.52. The remaining statistically significant results include miR-363-E2F3 r = −0.42 in EPs, miR-92a-MYCC r = 0.38 in MBs, miR-92a-E2F1 r = −0.40 in MBs, and miR-18b-E2F1 r = −0.37 in MBs. No statistically significant results were obtained in pilocytic astrocytomas. Strong correlations between miRNAs were noted, owing to the common origin of the miRNAs. The strongest correlations were found in medulloblastomas and ependymomas, while a lower value of the correlation coefficient was noted for pilocytic astrocytomas. Pearson correlation analyses were performed with a 95% confidence interval.
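For readers reproducing the pair-wise analyses above, the following sketch computes a Pearson r with its 95% confidence interval via Fisher's z-transform, the usual way such intervals are obtained; the two expression vectors are random placeholders rather than the study's ddCt measurements.

```python
# Pearson correlation with a 95% CI via Fisher's z-transform;
# both vectors are synthetic placeholders, not study data.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
gene = rng.normal(size=30)                # e.g., E2F1 ddCt per tumor (placeholder)
mirna = 0.5 * gene + rng.normal(size=30)  # e.g., miR-17-5p ddCt (placeholder)

r, p = pearsonr(gene, mirna)
z = np.arctanh(r)                         # Fisher z-transform of r
se = 1.0 / np.sqrt(len(gene) - 3)         # standard error of z
lo, hi = np.tanh([z - 1.96 * se, z + 1.96 * se])
print(f"r = {r:.2f} (95% CI {lo:.2f} to {hi:.2f}), p = {p:.4f}")
```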
Almost all tested miRNAs from the miR-17-92, miR-106b-25, and miR-106a-363 clusters showed overexpression in the analyzed cohort of pediatric brain tumors, and miRNA expression correlated with tumor grade. The highest miRNA expression was noted for medulloblastomas, followed by ependymomas, and finally pilocytic astrocytomas. The highest expression levels were confirmed for miR-18a (miR-17-92 cluster) and miR-18b (miR-106a-363 cluster). The double delta Ct (ddCt) of miR-18a was 3.28 for MBs, 2.04 for EPs, and 1.66 for PAs, respectively. The ddCt values noted for miR-18b were 3.54 for MBs, 2.19 for EPs, and 1.85 for PAs. It should also be emphasized that miR-17-5p and miR-20a-5p reached very low expression values in pilocytic astrocytomas: ddCt was 0.08 for miR-17-5p, while miR-20a-5p expression was below the internal control level (−0.11). The one exception was miR-363, which was downregulated in the medulloblastomas (ddCt = −0.70), whereas in the ependymomas (ddCt = 1.47) and pilocytic astrocytomas (ddCt = 1.48) its expression was higher. Comparison of miRNA expression levels from the miR-17-92, miR-106b-25, and miR-106a-363 clusters in the three tumor groups showed that the increase in miRNA expression depended on WHO grade and type ( and ). Statistically significant differences in miRNA expression occurred between MBs and PAs, next between EPs and PAs, while the smallest differences were noted between MBs and EPs .
2.3. Relationship between Gene Expression of Genes from MYC and E2F Families and miRNAs

The analysis of correlations between genes from the MYC and E2F families showed positive Pearson correlation coefficients. In pilocytic astrocytomas, eight out of nine gene–gene pairs achieved statistical significance, with r values ranging from 0.55 to 0.81. In the ependymoma group, six out of nine gene–gene pairs reached statistical significance, with r values in the range of 0.54 to 0.66. In medulloblastomas, only three pairs reached statistical significance, with r values from 0.38 to 0.51. There was no strong correlation between gene and miRNA expression (the correlation coefficients ranged from −0.61 to 0.38). Among the statistically significant results, the most interesting observations concerned the miR-106b-25 cluster. In ependymomas, gene expression correlated negatively with the expression of cluster members, e.g., miR-106b–MYCC r = −0.42, miR-106b–MYCN r = −0.61, miR-106b–E2F2 r = −0.51, miR-106b–E2F3 r = −0.58, miR-93–MYCN r = −0.47, miR-25–MYCN r = −0.47, miR-25–E2F2 r = −0.49, and miR-25–E2F3 r = −0.52. The remaining statistically significant results include miR-363–E2F3 r = −0.42 in EPs, miR-92a–MYCC r = 0.38 in MBs, miR-92a–E2F1 r = −0.40 in MBs, and miR-18b–E2F1 r = −0.37 in MBs. No statistically significant results were obtained in pilocytic astrocytomas. Strong correlations among the miRNAs themselves were observed, consistent with their common origin; the strongest correlations were found in medulloblastomas and ependymomas, while lower correlation coefficients were noted for pilocytic astrocytomas. Pearson correlation analyses were performed with a 95% confidence interval.
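As a concrete illustration of the pairwise analysis above, the sketch below computes a Pearson r and its p-value for one gene–gene pair. The ddCt vectors are invented placeholders, not study data, and scipy stands in for the Statistica workflow the authors used.

```python
import numpy as np
from scipy import stats

# Placeholder ddCt values for one gene-gene pair across samples of one
# tumor group (invented numbers; the study used per-sample qRT-PCR ddCt).
mycn_ddct = np.array([3.1, 4.0, 2.8, 3.9, 4.4, 3.3, 2.9, 4.1])
e2f2_ddct = np.array([6.0, 7.1, 5.7, 6.9, 7.4, 6.1, 5.9, 7.0])

r, p = stats.pearsonr(mycn_ddct, e2f2_ddct)
print(f"Pearson r = {r:.2f}, p = {p:.4f}")  # the pair is significant if p < 0.05
```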
The most important role of miRNAs lies in their complex interactions with genes that have crucial cellular functions. One example of such interactions is the feedback loop between MYC/E2F and miR-17-92. Expression of the E2F1 gene is driven by MYC, and MYC expression is in turn induced by E2F1, forming a positive feedback loop . The expression levels of the E2F and MYC transcription factors determine further cell activity, including transcription of the members of the miR-17-92 cluster. In addition, MYCC and MYCN can initiate transcription by binding directly to the miR-17-92 promoter . E2F1 expression is negatively regulated by two miRNAs from the cluster, miR-17-5p and miR-20a . Additionally, miR-20a modulates E2F2 and E2F3 translation . Dysregulated expression of the E2F and MYC families and the miR-17-92 cluster is often found in most types of cancer, including lung , breast , and prostate tumors, as well as leukemia . Here we present the results of the expression analysis performed for genes from the MYC (MYCC, MYCN, MYCL) and E2F (E2F1, E2F2, E2F3) families and miRNAs from the miR-17-92, miR-106b-25, and miR-106a-363 clusters in three types of pediatric brain tumors differing in histology and grade: medulloblastoma (WHO grade 4), ependymoma (WHO grade 2), and pilocytic astrocytoma (WHO grade 1). The E2F family consists of the E2F1, E2F2, and E2F3 transcription factors, with defined activating functions, and E2F4-8, with confirmed inhibitory functions. The first subgroup of E2F proteins is involved in the regulation of the cell cycle and has sufficient transcriptional activity to drive quiescent cells from G1 to S phase . The MYC family consists of three paralogs, MYCC, MYCN, and MYCL, which are well-characterized oncogenic factors . High levels of E2F1 have been associated with cell cycle dysregulation: if E2F1 expression is low, the mammalian cell remains at rest in the G1 phase, whereas high expression of E2F1 leads to increased cell proliferation that may result in tumor formation and progression . Our study confirmed that E2F1 mRNA levels correlated with tumor grade and were increased in high-grade lesions. Differences between the three analyzed groups were statistically significant, with p < 0.001 between medulloblastoma and pilocytic astrocytoma and p < 0.05 between medulloblastoma and ependymoma, and between ependymoma and pilocytic astrocytoma. Upregulation of E2F1 has been reported in studies performed on both in vitro and in vivo brain tumor models, which described a significant increase in E2F1 expression levels and activity . Oliver's team, in research performed on a mouse model of medulloblastoma, showed that MYCN promotes cell cycle gene expression; an increase in MYCN expression was accompanied by significantly increased levels of E2F1 (3.7-fold) and E2F2 (6.1-fold) . These observations are consistent with our results: we found the highest expression levels of E2F2, MYCN, and E2F1 in each of the studied tumor groups ( A). Swartling and colleagues, in a study performed on a medulloblastoma mouse model, showed that MYCN contributes to tumor initiation and progression; tumor maintenance requires constant MYCN expression, while inhibition of its expression leads to senescence of tumor cells .
Increased MYCN expression has been reported in cancers with an aggressive course and poor prognosis, particularly those of neural origin, and also in neuroendocrine tumors including medulloblastoma , while there is little evidence linking MYCN to glial-derived tumors . In our study, among the three types of tumors examined, the highest expression of MYCN was found in medulloblastomas, which confirms the observations made in previous reports . MYCN is a recognized biomarker in neuroblastomas, but little is known about the expression of MYCN in less aggressive brain tumors . To the best of our knowledge, only one report relates to several cases of anaplastic ependymoma (WHO grade 3) . Here we report that, among the MYC family, MYCN showed the highest level of expression in all three study groups. Our work is based on the analysis of pediatric infratentorial brain tumors, and a high level of MYCN expression can be associated with the development of the cerebellum . Additionally, the MYCC and MYCN proteins are largely functionally interchangeable , so it is quite possible that MYCN takes over the MYCC function. In our study, the MYCC mRNA level was similar in all groups (without statistically significant differences between the groups), while MYCN was distinguished by a very high level of expression. The high expression of MYCN in all the studied groups suggests that its expression depends not so much on the grade of the tumor as on its location. This is consistent with the research presented by Korshunov et al. on pediatric infratentorial glioblastomas with high MYCN expression . It has been shown that MYCC and MYCN can bind to the promoter of miR-17-92 and initiate its transcription . Overexpression of miR-92, miR-106a, miR-17-5p, and miR-93 has been associated with MYCN amplification , and in addition, E2F1 expression is negatively regulated by two miRNAs from the cluster, miR-17-5p and miR-20a . We therefore decided to examine the expression levels of the members of the miR-17-92 cluster and its two paralogs, miR-106a-363 and miR-106b-25. All components of the miR-17-92, miR-106a-363, and miR-106b-25 clusters showed overexpression relative to the control. Two molecules, miR-17-5p and miR-20a, have been the most frequently studied and reported miRNAs of these clusters, including in brain tumors . Studies concerning miRNA expression levels in human gliomas have shown that the expression of miR-17 and miR-20a was significantly higher than in control tissues; these molecules promoted proliferation and invasion and inhibited apoptosis in glioma cells, thus contributing to the increasing malignancy of the tumors . Our results showed that miR-17-5p and miR-20a reached similar expression levels relative to each other within each studied group: expression in the medulloblastoma group was high, in ependymoma the level was slightly lower, while in pilocytic astrocytoma these miRNAs reached a very low overexpression level compared with the control. This is consistent with literature reports that the expression level positively correlates with the malignancy of the tumor . Moreover, miR-20a and miR-17-5p regulate E2F1 by binding to the 3′-UTR of its mRNA . In our research, miR-17-5p and miR-20a did not reach the highest expression among the miRNAs tested, nor did E2F1 among the E2F family genes. This may indicate the involvement of these factors in mutual regulation. Yang et al. also showed that E2F1 is a direct target of miR-106a and that the level of miR-106a expression inversely correlates with tumor grade .
In our study, we showed that miR-106a expression correlates with WHO grade; the highest expression level was confirmed in medulloblastoma and the lowest in pilocytic astrocytoma. Such results have also been reported for tumors of glial origin . The highest expression levels in our study were noted for miR-18a and miR-18b. miR-18a is highly expressed in many types of cancer and cell lines, enhancing tumorigenesis, malignancy, and metastatic potential . In addition, the miR-18a plasma concentration was significantly higher in preoperative than in postoperative samples in gastrointestinal cancers . High expression of miR-18a has also been shown in glioblastoma tissue samples and cell lines, where the increasing level of this miRNA was associated with cell proliferation and progression . Similar results have been obtained for miR-18b, one of the most significantly upregulated miRNAs in colorectal cancer, where miR-18b expression promoted cell proliferation by facilitating cell cycle progression . In breast cancer, overexpression of miR-18b was noted in both clinical samples and cell lines, and upregulated miR-18b increased cell migration . A relationship between miR-18b expression and grade of malignancy has been demonstrated; in addition, an increase in miR-18b expression contributed to poor prognosis . Our results demonstrate that miR-18a and miR-18b showed the highest expression levels among all the miRNA molecules tested. However, until now there has been no other research on such a large scale covering all elements of the miR-17-92, miR-106b-25, and miR-106a-363 clusters, especially in pediatric brain tumors; it is therefore difficult to say whether this is a feature unique to this type of lesion. What is certain is that miR-18a and miR-18b levels are high in pediatric brain tumors and that decreasing expression is associated with lower grade. miR-363 was the sole miRNA tested by us whose expression level was close to the control level in medulloblastomas, whereas it was high in ependymomas and pilocytic astrocytomas. The literature concerning this issue is quite limited and concerns mainly glial tumors. Conti et al., in a study conducted on pilocytic astrocytomas (WHO grade 1), diffuse fibrillary astrocytomas (WHO grade 2), anaplastic astrocytomas (WHO grade 3), and glioblastomas (WHO grade 4), showed that miR-363 was upregulated in all the tumors and that its level positively correlated with the grade of the tested samples . Here we showed that the expression of miR-363 was higher in tumors of glial and ependymal rather than embryonal origin; accordingly, our results may confirm the observations presented by Conti et al. A positive Pearson correlation between the expression of genes from the MYC and E2F families was observed most clearly in the group of pilocytic astrocytomas, followed by ependymomas , whereas in the medulloblastomas the fewest gene–gene pairs reached statistical significance. Correlation analysis of miRNA–gene expression pairs showed no strong interactions; only a few miRNA–gene pairs yielded statistically significant results. Among them, inverse correlations (with r values as low as −0.61) were noted in the ependymoma group for miR-106b–MYCC, miR-106b–MYCN, miR-106b–E2F2, miR-106b–E2F3, miR-93–MYCN, miR-25–MYCN, miR-25–E2F2, and miR-25–E2F3, i.e., members of the miR-106b-25 cluster.
The MYC and E2F gene families are involved in the regulation of the cell cycle, and their expression levels can be disturbed during carcinogenesis. Moreover, members of the miR-17-92 cluster participate in the regulation of these genes . An important feature of miRNA biology is that a single miRNA may bind to multiple mRNA regions, thus regulating entire networks of proteins; conversely, one mRNA can be targeted by several miRNAs . It should be emphasized that tumorigenesis is a cascade of events, including dysregulation of individual genes and miRNA–gene interactions and activation of signaling pathways, which are influenced by multiple factors including tumor type, location, stage, and patient age . Thus, a gene can be regulated by many miRNAs (MYC is predicted to be targeted by 48 miRNAs, according to the mirdb.org database ) and can influence the regulation of other miRNAs by becoming part of a feedback loop. Some of the best-characterized feedback loops involving MYC are MYC/PTEN/miR-106b, miR-93, miR-25, miR-19a, miR-22, miR-26a, miR-193b, miR-23b; MYC/RB1/miR-106a, miR-106b, and miR-17; and MYC/VEGF/miR-106b, miR-106a, miR-93, miR-34a, miR-20a, miR-17, miR-16, miR-15a . To summarize, we confirmed the largest statistically significant differences in miRNA expression between medulloblastomas and pilocytic astrocytomas, followed by ependymomas and pilocytic astrocytomas, while the smallest differences were noted between MBs and EPs. However, we had expected the smallest differences between the least malignant tumors of common glial origin, i.e., PAs and EPs. This result may indicate that miRNA expression levels depend not only on grade but also on tumor type. Our current research, which focused on the evaluation of the expression of miRNAs from three clusters and of related genes from the MYC and E2F families in pediatric brain tumors, demonstrated that the expression levels of members of the miR-17-92 cluster and its paralogs are upregulated in the analyzed cohort and correlate with WHO grade and histology. Members of the E2F family were overexpressed in all samples, and the highest expression levels were confirmed for E2F2. Among the genes from the MYC family, the highest expression was observed for MYCN, and it also correlated with WHO grade and type. These observations indicate the plausible future therapeutic potential of miRNAs as critical targets in brain tumor therapy, regardless of tumor type.
4.1. Patients and Tissue Samples

Ninety samples of childhood brain tumors, stabilized in RNAlater and stored at −80 °C, were included in the analysis. The brain tumors comprised 30 pilocytic astrocytomas (WHO grade 1), 30 infratentorial ependymomas (WHO grade 2), and 30 medulloblastomas (WHO grade 4). All analyzed tumors were located infratentorially. The age of the patients ranged from 0 to 18 years. The control material was Human Brain Total RNA (Invitrogen, cat. no. AM7962). The experiments were approved by the Bioethical Committee at the Medical University of Lodz (permit no. RNN/122/17/KE).

4.2. RNA Isolation and Reverse Transcription

Total RNA, including the fraction of small non-coding RNAs, was extracted according to the manufacturer's instructions using the commercially available miRNeasy Mini Kit (Qiagen, Hilden, Germany). The quantity and purity of the RNA were assessed quantitatively and qualitatively.

4.3. Reverse Transcription and Quantification of Gene Expression by qRT-PCR

cDNA dedicated to gene expression analysis was synthesized from 500 ng of total RNA of each sample using 5× HiFlex Buffer (miScript II RT Kit, Qiagen). Real-time quantitative PCR analysis was performed in duplicate using Fast Advanced Master Mix and specific TaqMan probes (Life Technologies, Carlsbad, CA, USA) for the MYCC (Hs00153408_m1), MYCN (Hs00232074_m1), MYCL (Hs00420495_m1), E2F1 (Hs00153451_m1), E2F2 (Hs00231667_m1), and E2F3 (Hs00605457_m1) genes, with GAPDH used as the control housekeeping gene (Hs99999905_m1). Normalized relative expression levels of the examined genes in the tested samples compared with the control were calculated from each sample's average Ct value, according to the formula in Equation (1):

ddCt = dCt(target sample) − dCt(control sample) = (Ct_ref,tar − Ct_gene,tar) − (Ct_ref,cont − Ct_gene,cont). (1)

4.4. Reverse Transcription and Detection of miRNA Expression by qRT-PCR

To conduct the miRNA expression analysis, 750 ng of total RNA was reverse transcribed using a TaqMan MicroRNA Reverse Transcription kit and specific RT primers from TaqMan MicroRNA assays (Life Technologies, USA) for hsa-miR-17 (assay ID: 0023081), hsa-miR-20a (0005801), hsa-miR-106b (0004421), hsa-miR-93 (0010901), hsa-miR-20b (0010141), hsa-miR-106a (0021691), hsa-miR-18a (0024221), hsa-miR-18b (0022171), hsa-miR-19a (0003951), hsa-miR-19b (0003961), hsa-miR-92a (0004311), hsa-miR-25 (0004031), and hsa-miR-363 (0012711). Two sequences, U6 snRNA (0019731) and hsa-miR-9 (000583), were used as the internal controls. The miRNA expression analysis was performed using dedicated TaqMan probes, the PCR primer sets from TaqMan MicroRNA assays, and Fast Advanced Master Mix (Life Technologies, USA). Reactions for each assay were performed in duplicate, and the results were averaged for analysis. A CFX96™ Touch Real-Time PCR Detection System was used for acquisition (Bio-Rad, Hercules, CA, USA). Normalized relative expression levels of the miRNAs in the tested samples vs. the control sample were calculated from each sample's mean Ct value, according to the formula in Equation (2):

ddCt = dCt(target sample) − dCt(control sample) = (Ct_ref,tar − Ct_miRNA,tar) − (Ct_ref,cont − Ct_miRNA,cont). (2)
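A minimal sketch of Equations (1) and (2) in code, with hypothetical duplicate-averaged Ct values (the function name and inputs are illustrative, not part of the original analysis pipeline):

```python
def ddct(ct_ref_tar, ct_gene_tar, ct_ref_cont, ct_gene_cont):
    """Equations (1)/(2): ddCt = dCt(target sample) - dCt(control sample),
    where dCt = Ct(reference) - Ct(gene or miRNA)."""
    return (ct_ref_tar - ct_gene_tar) - (ct_ref_cont - ct_gene_cont)

# Hypothetical Ct values for one tumor sample vs. the control RNA:
# (18.0 - 24.0) - (18.2 - 27.9) = -6.0 - (-9.7) = 3.7 (upregulated vs. control)
print(ddct(ct_ref_tar=18.0, ct_gene_tar=24.0,
           ct_ref_cont=18.2, ct_gene_cont=27.9))
```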
4.5. Statistical Analysis

Statistica (v. 13.0) software was used for the statistical analysis of the results. Normality was checked using the Shapiro–Wilk test and the Lilliefors-corrected Kolmogorov–Smirnov test. Comparisons of miRNA and gene expression levels between groups were performed using ANOVA coupled with Tukey's post hoc test or the Kruskal–Wallis test, depending on the type of distribution. Spearman's rank correlation was used to assess the correlations between miRNA and gene expression.
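The sketch below mirrors this decision flow in Python. It is an assumption-laden illustration: scipy stands in for Statistica, the group data are random placeholders, Tukey's post hoc step is only noted in a comment, and the Lilliefors-corrected test is available in statsmodels (statsmodels.stats.diagnostic.lilliefors) rather than scipy.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholder ddCt values per tumor group (invented, 30 samples each)
groups = {
    "MB": rng.normal(3.5, 1.0, 30),
    "EP": rng.normal(2.1, 1.0, 30),
    "PA": rng.normal(1.7, 1.0, 30),
}

# Normality per group (Shapiro-Wilk); index [1] is the p-value
all_normal = all(stats.shapiro(v)[1] > 0.05 for v in groups.values())

if all_normal:
    _, p = stats.f_oneway(*groups.values())  # follow up with Tukey's HSD
    test = "ANOVA"
else:
    _, p = stats.kruskal(*groups.values())
    test = "Kruskal-Wallis"
print(f"{test}: p = {p:.3g}")

# miRNA-gene association, as in the Methods: Spearman's rank correlation
rho, p_rho = stats.spearmanr(groups["MB"], rng.normal(0.0, 1.0, 30))
print(f"Spearman rho = {rho:.2f}, p = {p_rho:.3f}")
```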
A commemoration of the "digital" side of Juan Rosai: a junior's perspective of the legacy of an all-round pathologist

When Bethany and Filippo asked me to write down a few lines in commemoration of Juan Rosai, underlining his role in the field of digital pathology, the first thought that arose in my mind was: 'Who is, or who was, Juan Rosai?' As a first-year pathology resident, Rosai's name was especially familiar because of the surgical pathology book bearing his name, which I have seen in regular use in different institutions. But apart from the fame of this work, I found myself knowing very little about his career and the professional achievements that must have led to his writing of an almost single-authored masterpiece. To answer my question, I turned to the internet. After some digging, the information I found started depicting Rosai as an authentic all-round professional. At first and superficial sight, he was described as a firm defender of conventional H&E slides and a convinced believer in morphology. He apparently played a role not only as a diagnostician but also as a researcher, a consultant, and a teacher, distinguishing himself as a true icon at all levels of modern pathology. Furthermore, among his multifaceted contributions to the discipline, he was widely acclaimed as a master of surgical pathology, to the point of receiving several flattering nicknames, such as "the Maradona of surgical pathology". However, continuing my personal quest for Rosai's true identity, I realized that he was not only focused on H&E and microscopy: he was a polyhedric pathologist and an innovative promoter of emerging technologies, such as immunohistochemistry, molecular biology, and digital pathology. He foresaw their importance and future applications before they proved to be mainstays of modern pathology, and correctly anticipated the main benefits of their wider use in both his consultation work and the general practice of surgical pathology. All the more surprisingly, he did all this while remaining a staunch supporter of the continuing role of morphology in standard diagnostic practice . He was probably the very first to understand that all these innovations, evolving into subspecialties of their own and including digital pathology itself, are per se pathology. During his long career as a consultant, Rosai had the chance to witness the steady technical advances that improved the quality and accessibility of digital pathology, from the first examples of 'static' digital images to the more up-to-date 'dynamic' whole slide imaging . He understood the value of digital pathology and tried to communicate its fundamental advantages to the wider scientific community. He was among the first to promote digital pathology as the key to facilitating second-opinion consultations, thanks to easier and faster sharing of digital images, rather than physical glass slides, among geographically distant pathologists . He understood the benefit of being able to carry out reproducible measurements directly on the digital slide, such as tumor width or depth of invasion, and to objectively quantify positive cells in immunohistochemical examinations.
He also praised digital pathology for a number of secondary features, such as the opportunity to manipulate the digital slide and add annotations, and the chance to examine the material at magnifications not easily attainable with traditional microscopy. I was truly fascinated by reading his wholehearted support of digital pathology in an email he sent to the FDA, which clearly showed how strong Rosai's advocacy of this new discipline was . Lastly, he understood the potential of archiving countless digital slides on servers rather than in conventional storage rooms, a feature that proved to be the basis for the creation of his own 'Rosai Digital Collection' ( https://www.rosaicollection.org/index.cfm ). For someone as young as me, the very existence of this collection is exciting and incredibly stimulating. It helps you realize how vast pathology is as a discipline and grants you the chance to look at slides that you would hardly ever see in routine work and, probably, in an entire diagnostic career. All the more excitingly, thanks to the digital nature of the collection, all the material Rosai collected and commented on is made freely available to everyone, from young trainees to seasoned diagnosticians, regardless of geographical location. Without any doubt, all this underlines Rosai's forerunning openness to the promising educational role that digital pathology has to offer. From 2000 to 2005, Rosai moved back to Italy to serve as Chairman of the Pathology Department of the National Cancer Center in Milan. From 2005 onwards, he created and directed the International Center for Pathology Consultations of the Italian Diagnostic Center in Milan, with the core aim of providing surgical pathology consultations through digital telepathology for pathologists, clinicians, and patients both in Italy and overseas. Going back to the starting question of this journey, Rosai is the surgical pathologist in the truest meaning of the word, embracing all technologies, from H&E to digital, to render diagnoses valuable to both the clinician and the patient. What I imagine now is Juan Rosai rendering diagnoses on digital slides with all their associated benefits, including AI. Although it is difficult to imagine an AI even minimally close to Rosai's talent as a diagnostician, in all likelihood Rosai himself would have encouraged us to pursue further research in the field, to create better-performing AI tools, and to promote their wider use by all. So, given Rosai's strong support of digital pathology, what are we waiting for to embrace and follow his ideas in this field? The following statement by Rosai should erase any lingering doubts and encourage us to move on to a fully digital approach . "I would simply conclude by saying that from a technical and scientific standpoint I am thoroughly convinced that a diagnosis made on the basis of a well-prepared digital image of a representative whole section is just as informative and accurate as that performed by using the time-honored examination of a glass slide under the binocular microscope." "Such an important matter […] I have no doubts will revolutionize the field of pathology, if it is not doing that already."
The ionic liquid-assisted sample preparation method pTRUST allows sensitive proteome characterization of a variety of bacterial endospores to aid in the search for protein biomarkers

Bacteria normally multiply by repeated equal divisions, a process known as trophic propagation. However, certain bacteria, including those of the Bacillus and Clostridium groups, undergo unequal division during nutrient starvation and form specialized structures called endospores (hereafter referred to as spores). Spores are a dormant form of bacteria and are among the biological structures most resistant to physical and chemical insults . Various spore-producing bacteria are pathogenic. For example, Bacillus cereus and Bacillus weihenstephanensis cause food poisoning, whereas Bacillus anthracis and Clostridium botulinum have been used in bioterrorism. Therefore, establishing rapid and sensitive detection systems for analyzing spore samples is essential for ensuring protection against these pathogens and diseases . Spores are primarily composed of protein, DNA, and small molecules. Proteins are crucial for the formation, resistance, and pathogenicity of spore-forming bacteria . Detailed knowledge of protein targets or protein biomarkers would thus aid in better understanding the molecular mechanisms of these biological processes and improve existing and emerging spore-based detection techniques to guarantee food and consumer safety. Various proteomic approaches have been developed to identify these molecules, primarily using non-pathogenic B. subtilis spores as models . However, published procedures typically use conventional solubilizers, such as sodium dodecyl sulfate (SDS) and urea, to dissolve spore molecules, and the resistance of spore structures to these traditional agents makes it difficult to analyze the proteins efficiently. Electron microscopic analysis has revealed that incubation of B. subtilis spores in SDS disrupts only some regions of the spores (for example, the coat and outer membrane), whereas most of the remaining regions (e.g., cortex, inner membrane, and core) remain visible . Ultimately, large amounts of protein (20–800 μg) are typically required for these proteomic procedures , which are labor-intensive and time-consuming. Such large-scale preparations may also reduce the purity of spore samples and increase the risk of pathogenic transmission. Ionic liquids (ILs) are powerful solvent media in biomedical and pharmaceutical applications . We recently reported that i-soln (a mixture of the imidazolium-based IL 1-butyl-3-methylimidazolium thiocyanate, [bmim][SCN], and NaOH) can completely dissolve highly insoluble heat-aggregated hen egg whites within 10 min . We also developed novel proteomic sample preparation methods (namely pTRUST and its original version, iBOPs ) for the direct processing of i-soln-solubilized samples with trypsin using hydrophobic microbeads. The analytical performance of these methods involving the i-soln system has allowed the simple and sensitive proteomic characterization of various insoluble samples, including SDS-resistant aggregates deposited in senescent Caenorhabditis elegans and inclusion bodies , in addition to integral membrane proteins from various human cancer cell lines . Very recently, we applied pTRUST to the spore proteome of B. subtilis and demonstrated that highly efficient shotgun analysis of the spore proteome was achieved even with micrograms or less of the starting material .
The analytical range observed for pTRUST was 50- to 2,000-fold higher than that previously reported for gel-based or gel-free approaches . However, despite the superiority of this method in analyzing insoluble substances, its application in spore proteomics has so far been limited to the identification and characterization of resistance proteins in B. subtilis spores . In this study, we evaluated the efficacy and generality of the pTRUST technology using highly purified spores from three spore-forming bacteria other than B. subtilis. We also analyzed the protein targets identified by pTRUST and mass spectrometry (MS) using a bioinformatics program to search for potential spore biomarkers.

Strains and materials

The Bacillus strains used in this study were B. subtilis subsp. natto BEST195, B. licheniformis ATCC 14580, and B. cereus ATCC 10987. Each of these strains produces spores, and their genomic sequences have already been determined. [bmim][SCN] was purchased from Sigma-Aldrich Co. LLC (St. Louis, MO, USA). i-soln was prepared by mixing [bmim][SCN] and 0.5 M NaOH (in water) at a 40:60 (v/v) ratio . POROS R2 microbeads (diameter, 50 μm) were obtained from PerSeptive Biosystems, Inc. (Framingham, MA, USA). Before use, the beads (500 μg) were rinsed with 100 μL of 75% acetonitrile (CH₃CN) in 0.1% trifluoroacetic acid and 100 μL of 100 mM Tris-HCl (pH 8), and then suspended in 200 μL of water . StageTips (polystyrene-divinylbenzene copolymer) were obtained from Nikkyo Technos Co., Ltd. (Bunkyo-ku, Tokyo, Japan). Other materials were purchased as previously described .

Preparation and purification of spores

Bacteria were grown in Schaeffer's medium at 37°C as previously described . Spores were harvested 18 h after the cessation of exponential growth, washed in deionized water for several days, and collected by centrifugation at 12,000 × g for 4 min at 4°C . To purify the spores, the resultant pellets were incubated in 0.1 mL lysozyme buffer (10 mM Tris-HCl, pH 7.2, with 1% [w/v] lysozyme) for 10 min at 37°C and washed repeatedly with 10 mM Tris-HCl (pH 7.2) and 0.5 M NaCl at 25°C. More than 99% phase-bright spores and almost no dark or gray spores were obtained for all three bacterial samples, as assessed using phase-contrast microscopy . The purified spores were resuspended in 10 mM Tris-HCl (pH 7.2) and frozen at −80°C. The purified spores were counted using colony formation assays on agar plates, as described previously . The protein concentrations of the spores were determined using the Bradford assay , with bovine serum albumin as the standard.

Spore lysis assay

To lyse the spores (2–4 × 10⁸ cfu), 1 mL i-soln was added, and the mixture was incubated at 20°C using three cycles of ultrasonication (2 min) and agitation (1 min) in a water bath sonicator (ASU-10D; AS ONE Corporation, Osaka, Japan) and a vortex mixer, respectively . Control experiments were performed in 1 mL water with or without sonication, or in 1 mL of 1% SDS with boiling for 3 min. The dissolution efficiency was assessed by measuring the turbidity of the resulting solution at 600 nm using a UV-vis spectrophotometer (SmartSpec™ Plus; Bio-Rad Laboratories, Inc., Hercules, CA, USA).

MS sample preparation using pTRUST

MS samples were prepared according to the previously defined pTRUST protocol .
For the reduction of disulfide bridges, purified spores (5–6 × 10⁶ cfu each) containing 1 μg protein were incubated with 20 mM Tris(2-carboxyethyl)phosphine in 50 μL i-soln using three cycles of ultrasonication (2 min) and agitation (1 min) at 20°C. The reduced samples were treated with 40 mM iodoacetamide in the dark for 20 min to alkylate the free cysteines. Subsequently, the samples were mixed with the R2 suspension described above, agitated for 1 min with a vortex mixer, and then allowed to stand for 1 min for protein adsorption onto the beads. After repeating the adsorption step four times, the bead–protein mixture was pipetted into a StageTip container and centrifuged at 2,000 × g for 30 s at 20°C to remove any excess i-soln from the beads retained on the StageTip filter . The retained beads were washed sequentially with 100 μL Tris buffer (100 mM Tris-HCl, pH 8.0), 100 μL acetone twice, 100 μL Tris buffer, and 100 μL water via centrifugation under the same conditions. Trypsin digestion was performed at 37°C overnight with 0.5 μg trypsin in 20 μL trypsin digestion buffer (5 mM Tris, 60% CH₃CN, pH 8.8) in a sealed StageTip container with rotation .

LC-MS/MS analysis and protein identification

The peptide samples were recovered from the beads by centrifugation and underwent LC-MS/MS analysis as described . The MS/MS data were converted into the Mascot-compatible data format using Proteome Discoverer (version 3.0; Thermo Fisher Scientific K.K., Tokyo, Japan), and the database search was performed using Mascot software (version 2.3.02; Matrix Science K.K., Tokyo, Japan) against the UniProt B. subtilis subsp. natto BEST195 (taxid:645657), B. licheniformis strain ATCC 14580 (taxid:279010), and B. cereus strain ATCC 10987 (taxid:222523) proteome databases. The search parameters were the same as previously described: fixed modification for carbamidomethyl (C); variable modifications for acetylation (protein N-terminus) and oxidation (Met); maximum missed cleavages, 1; peptide mass tolerance, ±25 ppm; and MS/MS tolerance, ±0.8 Da . The peptide identification threshold was based on a Mascot score of p < 0.05, which is commonly used and was validated in practice in our previous works .

BLAST search

All BLAST searches were performed on the servers of the National Center for Biotechnology Information (NCBI). The identified proteins were searched against the NCBI non-redundant B. subtilis protein sequence database ( Bacillus subtilis subsp. subtilis 168 [taxid:224308]) using the NCBI protein BLAST tool ( https://blast.ncbi.nlm.nih.gov/Blast.cgi ), version 2.15.0+, with preset algorithm parameters. Only sequences with >50% amino acid sequence identity over >60% of the aligned protein sequence were considered putative orthologs of the corresponding B. subtilis proteins. Some of the identified proteins (indicated in the text) were also searched against the corresponding NCBI non-redundant bacterial and whole-organism protein databases using the BLAST tool.
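A minimal sketch of the ortholog-calling filter described above, applied to one hit from BLAST tabular output. The identity and coverage thresholds come from the text; the field values and the choice to compute coverage as alignment length over query length are assumptions for illustration.

```python
def is_putative_ortholog(pident: float, align_len: int, query_len: int,
                         min_identity: float = 50.0,
                         min_coverage: float = 60.0) -> bool:
    """Criterion from the text: >50% identity over >60% of the query sequence."""
    coverage = 100.0 * align_len / query_len
    return pident > min_identity and coverage > min_coverage

# Hypothetical hit: 62.5% identity, 180-residue alignment, 250-residue query
print(is_putative_ortholog(62.5, 180, 250))  # True (coverage = 72%)
```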
Lysis of distinct bacterial spores with i-soln

We recently reported that i-soln can lyse B. subtilis (strain 168) spores with high efficiency by sonication alone . To determine whether i-soln is also effective for dissolving different bacterial spores, highly purified spores from the closely related subspecies B. subtilis subsp. natto (strain BEST195) and two other species, B. licheniformis (strain ATCC 14580) and B. cereus (strain ATCC 10987), were incubated in i-soln. As demonstrated in , i-soln showed the highest dissolution efficiency at all time points in all samples compared with the controls suspended in water (with no treatment), sonicated in water as described above, or boiled in 1% SDS, as assessed from the turbidity (OD600) of the resulting solution. In particular, the values in i-soln were approximately 20–30% of the no-treatment control values in all samples, even at 0 h. These values are consistent with those reported for B. subtilis spores . Thus, i-soln can be applied directly to dissolve these bacterial spores efficiently.
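A minimal sketch of the turbidity readout described above. The OD600 readings are invented, and residual turbidity of roughly 0.2–0.3 relative to the untreated control corresponds to the 20–30% values quoted:

```python
def residual_turbidity(od600_treated: float, od600_untreated: float) -> float:
    """Fraction of the no-treatment control turbidity remaining after lysis;
    lower values indicate more complete spore dissolution."""
    return od600_treated / od600_untreated

# Hypothetical OD600 readings for an i-soln-treated vs. untreated suspension
print(f"{residual_turbidity(0.25, 1.05):.0%} of control turbidity")  # ~24%
```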
Proteomic identification and characterization of three Bacillus spores using pTRUST and LC-MS/MS

To evaluate the efficiency and applicability of the present pTRUST method, purified spores solubilized with i-soln (1 μg protein each) were processed with pTRUST in triplicate, and the resulting polypeptides were analyzed using LC-MS/MS. As shown in , approximately 180–200 proteins (for B. subtilis natto and B. licheniformis) and 300 proteins (for B. cereus) were consistently identified in each MS run, with good repeatability, even at this low (1 μg) input quantity (see also – Tables). To characterize the identified proteins, we merged all identifications from each sample (a total of 289, 259, and 437 proteins from the spores of B. subtilis natto, B. licheniformis, and B. cereus, respectively; – Tables) and analyzed their amino acid sequences using the UniProt protein database for molecular weight, isoelectric point, and GRAVY (grand average of hydropathicity) value. This analysis revealed a wide variety of biochemical properties among the identified proteins ( – Tables), indicating that identification using pTRUST was unbiased with respect to these parameters, as previously described . Furthermore, the identified proteins included many known sporulation-related factors, such as spore coat Cot proteins, germination-associated Ger proteins, and a number of ribosomal subunits ( – Tables). Thus, the pTRUST method with the i-soln system efficiently processed these spore preparations for sensitive MS analysis, as reported for B. subtilis spores .

Identification of putative protein biomarkers for detecting various or specific spores using BLAST search

To further characterize the identified proteins, their amino acid sequences were compared with those in the NCBI B. subtilis (strain 168) protein database using the BLAST search program. In line with the phylogenetic distances between these Bacillus species , 231 (93.8%), 200 (77.2%), and 221 (49.0%) of the total proteins identified from the B. subtilis natto, B. licheniformis, and B. cereus spores, respectively, showed strong sequence identity (>50%) with the corresponding B. subtilis proteins, over >60% alignment of the protein sequences ( – Tables, marked in yellow). This high degree of conservation suggests that these are orthologous proteins that may have similar biological functions. Of these orthologs, a set of 25 proteins, comprising 14 sporulation-related proteins (CotE, CotJA, CotJC, CwlJ, DacF, GerQ, SpoIVA, SpoVS, SpoVIF, SspB, YabG, YdcC, YloB, and YqfC) (annotated in the SubtiWiki database), 2 proteins involved in metabolism (AcpA and Mdh), 2 in DNA/RNA binding (Hbs and Hfq), 3 in protein translation (Tfu, RL2, and RL7), 2 in transport (PtsH and YfkD), 1 in stress response (TrxA), and 2 uncharacterized proteins (YkfD and YtfJ), was common to all of the aforementioned spores, and also to the B. subtilis spores described in a previous study and in our recent unpublished work (for PtsH and YtfJ) . Further comparative studies using the reported amino acid sequences from 43 spore-forming bacteria associated with industrial dairy processing environments and product spoilage revealed, using the BLAST tool, that 16 of these bacteria shared orthologs (>50% identity) of all 25 selected proteins and that 42 bacteria shared 15 or more (except Sporosarcina aquimarina, with 12 proteins) . We also found that three of the four proteins (CotJC, DacF, and SpoIVA) common among B. subtilis 168, Clostridium difficile 630, and B. cereus 14579 spores are included in our list. Thus, the set of 25 proteins identified by pTRUST and LC-MS/MS represents the most likely universal biomarkers for detecting spores in various samples. Using BLAST, we then compared the proteins with no sequence homology (0% identity) to B. subtilis (strain 168) proteins (161 proteins in total; see – Tables) against the NCBI whole-organism protein database. We confirmed that, despite the lack of orthologs in the B. subtilis strain, many other bacterial species shared orthologous proteins (>50% identity) with the corresponding 161 proteins.
Among these, at least nine proteins of B. subtilis natto (Accession Nos. A0A060PFM3, A0A060PFU1, A0A060PPB5, A0A060PPK7, D4FX13, D4FX61, D4FZQ5, D4G6U3, and D4G799; ) appear to be products of horizontal gene transfer (HGT) , because their orthologs are not present in the same species, B. subtilis (strain 168). However, only two proteins did not meet the above criteria. One was the D4FV94 protein in B. subtilis natto (Accession No. D4FV94_BACNB), and the other was the GntR family transcriptional regulator Q737A2 in B. cereus (Accession No. Q737A2_BACC1) . There are no known functions for these two proteins. However, only a very few bacterial hypothetical proteins in the databases (e.g., those from B. safensis, B. pumilus, and C. algoriphilum) showed weak sequence similarity to the D4FV94 protein (<38% identity). In contrast, no proteins homologous to the transcriptional regulator Q737A2 in B. cereus were found in the whole-organism database. Thus, the two proteins identified in this study may be species-specific spore biomarkers whose orthologs are absent or rare in all organisms.
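The cross-species tally described above reduces to set intersections. The sketch below uses invented presence/absence data for a subset of the 25 markers to show the shape of the computation; the species names and hit sets are placeholders, not the study's BLAST results.

```python
# Hypothetical BLAST outcome: species -> markers with a >50%-identity ortholog.
# Real input would cover all 25 markers and all 43 dairy-associated species.
markers = {"CotE", "CotJC", "DacF", "SpoIVA", "SspB"}
ortholog_hits = {
    "B. licheniformis": {"CotE", "CotJC", "DacF", "SpoIVA", "SspB"},
    "S. aquimarina":    {"CotE", "DacF", "SspB"},
}

for species, found in ortholog_hits.items():
    shared = len(markers & found)  # set intersection counts shared markers
    print(f"{species}: {shared}/{len(markers)} marker orthologs")
```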
Using BLAST, we then compared the proteins with no sequence homology (0% identity) to B. subtilis (strain 168) proteins (161 proteins in total; see – Tables) against the NCBI whole-organism protein database. We confirmed that, despite the lack of orthologs in the B. subtilis strain, many other bacterial species shared orthologous proteins (>50% identity) with the corresponding 161 proteins. Among these, at least nine proteins of B. subtilis natto (Accession Nos. A0A060PFM3, A0A060PFU1, A0A060PPB5, A0A060PPK7, D4FX13, D4FX61, D4FZQ5, D4G6U3, and D4G799; ) appear to be products of horizontal gene transfer (HGT), because these orthologs are not present in the same species, B. subtilis (strain 168). Only two proteins did not meet the above criteria: one was the D4FV94 protein in B. subtilis natto (Accession No. D4FV94_BACNB), and the other was a GntR family transcriptional regulator, Q737A2, in B. cereus (Accession No. Q737A2_BACC1). There are no known functions for these two proteins. Only a very small number of bacterial hypothetical proteins in the databases (e.g., those from B. safensis, B. pumilus, and C. algoriphilum) showed weak sequence similarity to the D4FV94 protein (<38% identity). In contrast, no proteins homologous to the transcriptional regulator Q737A2 in B. cereus were found in the whole-organism database. Thus, the two proteins identified in this study may be species-specific spore biomarkers whose orthologs are absent or rare in other organisms.

Proteomic analysis of spore samples remains a major challenge, owing to poor solubilization and extraction yields. In the present study, we showed that pTRUST and LC-MS/MS facilitated the rapid solubilization and processing of multiple proteins, both previously characterized and uncharacterized, from trace amounts of purified spore preparations ( – Tables). To the best of our knowledge, this study is the first report on the proteomic characterization of B. subtilis natto and B. licheniformis spores and the first description of their proteomes derived directly from purified preparations (although there are reported cases for B. cereus spores). These results support the expanded use of pTRUST in spore proteomics, which has so far been limited to the identification and characterization of resistance proteins in B. subtilis spores. The pTRUST method has several advantages for spore analysis. (i) i-soln can dissolve spores more effectively than conventional solubilizers such as SDS, improving the efficiency of sample preparation for high-performance MS analysis. (ii) pTRUST is simple and does not require the additional sample purification steps necessary in previous methods such as PAGE or hydrophobic chromatography. (iii) pTRUST enables efficient processing of a variety of low-abundance (or low-concentration) spore samples ( , – Tables). Indeed, the pTRUST protocol using hydrophobic R2 bead supports can quantitatively capture most proteins without selectivity and enhances both the catalytic activity of trypsin and the solubility of tryptic peptides during the digestion reaction. Furthermore, the small StageTip container used for trypsin digestion facilitates small-scale enzymatic digestion (<20 μL) by decreasing the surface area available for non-specific adsorption losses of proteolytic peptides. Therefore, we propose pTRUST as one of the simplest and most practical platforms for characterizing spore proteins and biomarkers that are otherwise difficult to detect because of their low abundance.

The direct detection of spores is critical for determining microbial contamination in various types of food and environmental samples and for protecting against natural infections and biological threats. Various techniques targeting spore nucleic acids, metabolites (such as dipicolinic acid and ATP), and proteins have been exploited to detect spores, but each of the traditional methods has several shortcomings, especially regarding stability; thus, the discovery and characterization of new targets is needed. In this regard, one notable finding of the present study is that many proteins identified in the purified spore samples have orthologs shared among the species examined here, as well as other bacteria ( , – Tables). Such factors may play key roles in general spore physiology, including sporulation, germination, and outgrowth to vegetative cells. In contrast, the D4FV94 protein and the GntR family transcriptional regulator Q737A2 are rare or absent in the organism-wide database. Although their functions have not been characterized, they may be involved in species-specific spore phenomena.
We previously produced green fluorescent protein (GFP) fusions of 20 newly identified B. subtilis proteins and demonstrated, using fluorescence microscopy, that all of these candidates were authentic spore components. Validating proteomic data using such an alternative approach can effectively corroborate the accuracy and reliability of identification. Therefore, for a more in-depth evaluation, similar cell-imaging assays using GFP will be necessary to test the validity of the identified proteins and candidate biomarkers.

In conclusion, the pTRUST method involving the i-soln system allowed us to identify various previously uncharacterized proteins and potential biomarkers that may be associated with spores. The pTRUST technology improves upon other current approaches and is likely to be useful as a general procedure for sensitive spore characterization at the protein level. The pTRUST protocol is rapid, with a full cycle time of only 45 min before trypsin digestion. When used in combination with conventional quantitative techniques such as stable isotope labeling and label-free methods, this technology can also be adapted without modification to sensitive analyses of spore-protein dynamics. We therefore believe that pTRUST opens new avenues of investigation for a wide range of biological and therapeutic applications in spore research.

S1 Table. Proteins identified using pTRUST and liquid chromatography with tandem mass spectrometry (LC-MS/MS) in highly purified Bacillus subtilis subsp. natto spores. (XLSX)
S2 Table. Proteins identified using pTRUST and LC-MS/MS in highly purified Bacillus licheniformis spores. (XLSX)
S3 Table. Proteins identified using pTRUST and LC-MS/MS in highly purified Bacillus cereus spores. (XLSX)
S4 Table. Characterization of proteins identified using pTRUST and LC-MS/MS in highly purified Bacillus subtilis subsp. natto spores. (XLSX)
S5 Table. Characterization of proteins identified using pTRUST and LC-MS/MS in highly purified Bacillus licheniformis spores. (XLSX)
S6 Table. Characterization of proteins identified using pTRUST and LC-MS/MS in highly purified Bacillus cereus spores. (XLSX)
S7 Table. Potential orthologous proteins identified in all four Bacillus spores. (XLSX)
S8 Table. Potential orthologous proteins found in the 43 spore-producing bacteria associated with dairy processing and products. (XLSX)
A system-wide approach to digital equity: the Digital Access Coordinator program in primary care

The rapid transition to a digital front door of healthcare prompted by the pandemic made evident disparities in who had access to digital tools. Like other healthcare organizations, at the beginning of the pandemic, we found disparities in access to patient portals and video visits. For example, Spanish-speaking patients were 43% less likely to use video visits compared to English-speaking patients. Our organization was tasked with increasing digital access as part of broader health equity efforts. Digital disparities are driven by multiple factors including lack of internet access, devices, and language-adapted platforms, as well as limited digital literacy. Our organization undertook initiatives to address these factors including a device loaning program, patient portal translation, and digital literacy support. In this case report, we focus on addressing digital literacy gaps by implementing a digital navigation program to improve disparities in patient portal enrollment. Digital navigation has been identified as a potential solution for digital disparities by providing support for patients with limited digital literacy, but large-scale integration of digital navigation in the healthcare setting has been limited. The Digital Patient Experience team at Mass General Brigham (MGB) aimed to increase digital access by implementing a digital navigation program. Our goals were to develop, implement, and evaluate a system-wide digital navigation program that supported patients in enrolling in our patient portal.
Program description

We established the Digital Access Coordinator (DAC) program, whose goal is to address gaps in digital literacy among MGB's primary care population. MGB has 1.25 million patients across 1211 clinicians. The DACs are a team of 12 digital navigators who are multilingual and representative of the diverse backgrounds of our patients. They speak the top 6 non-English languages spoken by our patients: Spanish, Portuguese, Haitian-Creole, Russian, Cantonese/Mandarin, and Arabic. DACs help patients enroll in our portal, which runs on Epic's MyChart and was translated into the top 6 non-English languages our patients speak as part of an earlier initiative to provide a linguistically appropriate user interface. The translation was done by our organization: Epic supports limited translation in some languages, but it did not support all the languages our patients speak, and for the languages it did support, coverage was limited and we had additional customized content that required translation. DACs enroll patients (and/or care partners) in the portal and acquaint them with key features, such as secure messaging, medication renewals, or checking test results. They also educate patients on how to use external apps for virtual visits and remote patient monitoring, all of which integrate with the portal. To align with organizational strategy, we implemented the DAC program in our primary care clinics.

Implementation team

We had a team structure where DACs were managed centrally but deployed locally at each clinic. This structure helped us manage the scope of DAC work, troubleshoot operational issues, and create efficient communication pathways. The team consisted of:

- 12 DACs covering MGB's top 6 non-English languages, distributed according to the linguistic needs of our patients (ie, we hired more Spanish-speaking DACs since this was our most common non-English language) (12 FTEs)
- Central administrative staff:
  - Medical Director (0.1 FTE)
  - Administrative Director (0.3 FTE)
  - Senior Program Manager (1 FTE)
  - Project Manager (1 FTE) for day-to-day DAC operations

Implementation process

To establish the program, we performed a needs assessment, partnered with key stakeholders, established models for digital literacy support, developed workflows and resources for DACs, and defined hiring criteria for DACs.

Perform digital equity assessment across our organization

We performed a digital equity assessment to identify gaps in digital access. While we wanted to focus on multiple forms of digital access, we decided to focus on portal enrollments since the portal serves as the main patient-facing tool. This assessment revealed a range of needs, with some clinics having low portal enrollments and others having most patients enrolled. Given the equity focus of the program, we also stratified all data by race, ethnicity, age, and language. Stratification by language was particularly important since it determined which DAC was most appropriate for each clinic.

Partner with key stakeholders

As part of the implementation strategy, we partnered with clinical and health information technology (HIT) leadership. We obtained buy-in from clinical leadership at each site since the deployment of the DAC program required workflow changes and staff to refer patients. This was critical early on, as the role of the DAC was new to clinical teams. To maintain leadership engagement, we held monthly meetings with all sites where we provided updates and shared portal enrollment data.
Collaboration with HIT leadership facilitated the building of key resources in the electronic health record (EHR), including referral orders, reports, and program analytics.

Establish DAC program models

Using this assessment, we developed 2 models of digital navigation: an embedded model and a central model.

Embedded model. For clinics that had lower portal enrollments at baseline, the embedded model integrated an on-site DAC with the clinic staff, space, and workflows. Support was delivered to patients in person.

Central model. For clinics that had higher portal enrollments at baseline, the central model provided remote support. Patients needing support were identified via an EHR referral order received from clinical teams or via patient reports generated by the program. Support was delivered to patients via telephone.

Having these 2 models allowed us to extend the program across more primary care clinics. We piloted both models and were able to improve upon challenges that we found during the pilots, which allowed the program to launch at additional sites more seamlessly. As we expanded the program, we developed an implementation guide to onboard new clinics, especially those using the embedded model ( ).

Develop workflows and DAC resources

The introduction of DACs into the clinical team required the development of workflows that allowed DACs to interact with patients and enroll them in the portal. Overall, the DAC workflows had 4 key components: (1) identification, (2) outreach, (3) enrollment, and (4) education ( ). We leveraged EHR referral orders, patient registries, worklists, and automatic documentation tools to align with these workflows and support data analytics for reporting and evaluation. For example, we deployed an "Ambulatory Referral to DAC" order that staff could submit, which would populate a worklist that the DACs used to follow up with patients. We established DAC-specific work areas and communication pathways. For the embedded model, we created dedicated clinic spaces where the DAC would sit and have enough space to interact with patients. Similarly, for the central model, we set up offices where the DACs could perform phone outreach. To support the DACs, we developed scripts that described the process of enrolling in the portal. Given the multilingual nature of the program, we worked with HIT teams to set up phone trees that would allow patients to be connected with a DAC who spoke their preferred language. Interpreter support was also available if the patient's preferred language did not match the DAC's spoken language. Clinics were also given postcards and flyers to advertise the program.

DAC hiring, onboarding, and training

Since this was a new role, we did not initially have clear hiring criteria or an onboarding process. We had to iterate and review external resources to determine the appropriate skill sets for a DAC. We prioritized DACs who spoke one of our top 6 languages. All candidates completed a language proficiency assessment in both English and their second language. Using a third-party assessment for non-English language proficiency helped ensure our DACs met the linguistic needs of our patients. DACs were expected to have some technology knowledge (ie, different devices, Microsoft Office) but were not expected to be technically savvy; we were able to give DACs technical training as needed. We emphasized candidates who had strong customer service and communication skills.
As the program developed, we established clear hiring criteria and could be explicit with new DACs about their expected daily work. Upon hiring, they went through a 2-week onboarding process where they received training on the portal, EHR, and workflows.
Program evaluation relied on 2 primary metrics: outreach and portal enrollment. We tracked the total number of unique patients the DACs reached out to, either in person or via telephone call depending on the model. Of those patients they were able to reach, we tracked the portal enrollment rate. All data were stratified by race, ethnicity, age, and language to measure equity. From May 2021 to November 2022, the DACs conducted outreach to 16 045 patients. Of the 13 413 patients they reached, they successfully enrolled 8193 (61%) in the patient portal ( ). Most patients were of Other race and Hispanic ethnicity. About 2854 (89%) of the patients who self-reported Other race also identified as Hispanic; we did not have more granular data on patients who identified as Other race. In terms of language, we enrolled mostly English-speaking (44%) and Spanish-speaking (44%) patients. Using our embedded model, we increased portal enrollment across 7 clinics with a dedicated DAC (mean increase: 21.3%, standard deviation: 9.2%) ( ). For example, from August 2021 to November 2022, clinic A increased portal enrollment from 42% before the DAC program to 74% after the program began. Additionally, we assessed the clinic experience and patient experience. For clinic experience, we delivered a survey to 6 of our embedded clinics. Most clinics responded that the DAC program improved their ability to care for patients and that clinical teams responded well to the program ( ). We also surveyed 26 patients about their experience with the program. Overall, patients responded positively: they described feeling more confident about using the portal and felt that it would make it easier to manage their care.
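A minimal sketch of the stratified evaluation described above is shown below; the column names and data are hypothetical, not the program's actual schema, and the same grouping pattern applies to race, ethnicity, and age:

```python
# Compute outreach counts and portal-enrollment rate by preferred language.
import pandas as pd

outreach = pd.DataFrame({
    "patient_id": [1, 2, 3, 4, 5, 6],
    "language":   ["English", "Spanish", "Spanish", "Haitian-Creole", "English", "Spanish"],
    "reached":    [True, True, False, True, True, True],
    "enrolled":   [True, True, False, False, True, True],
})

by_language = outreach.groupby("language").agg(
    outreached=("patient_id", "nunique"),
    reached=("reached", "sum"),
    enrolled=("enrolled", "sum"),
)
# Enrollment rate is computed among reached patients, mirroring the 61% figure above.
by_language["enrollment_rate"] = by_language["enrolled"] / by_language["reached"]
print(by_language)
```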
Varied model success

The central model was more challenging to deploy. Referral volume was lower than expected: placing a non-clinical referral order is not a priority for clinicians, which limited the number of patients referred to the program. Additionally, supporting patients over the phone proved difficult. The DACs found that when "cold calling", patients were unavailable to discuss the portal, or it was challenging to walk them through enrollment. The central model worked better for large enrollment campaign efforts, which involved receiving patient lists from clinics or mailing letters followed by telephone calls.

Data needs

Data requirements for planning and evaluation related to portal enrollment and DAC metrics required iteration. When the program went live, we initially captured program metrics by having the DACs input their activities into a shared Excel document, as we were not yet sure what the data tracking needs would be. This introduced inaccuracies and an inability to easily track productivity and operational metrics. As we learned what documentation and tracking were necessary, we worked with HIT teams to develop EHR-integrated data capture tools, which facilitated consistent reporting.

Program funding

While prior efforts to implement digital navigation were limited by a lack of funding, we secured organizational funding for the program. The DAC program was one pillar of a larger organizational effort focused on health equity, United Against Racism.
For organizations implementing a digital navigation program, we recommend determining the scope of digital navigation, establishing key digital equity metrics, and ensuring stakeholder buy-in at the clinic level.

Determine program scope and structure

Use existing digital health data (ie, portal enrollment) to identify digital disparities and determine the need for digital navigation. We suggest piloting and iterating before broadly hiring staff and expanding. Additionally, develop an operational training document for onboarding DACs, which includes scripting and portal and EHR training.

Establish key metrics

Identify key metrics and how you will track them ahead of launch, automating the tracking as much as possible. We suggest reviewing prior work to determine program metrics. While there is no consensus on digital access metrics, we chose outreach and enrollment as our primary metrics. As the program progresses, DACs can focus on engagement with the portal (eg, sending messages, requesting refills, having telehealth visits). Programs can use preliminary data to adjust models. For example, when the central model was not yielding consistent portal enrollment, we pivoted some DACs to other clinical sites (eg, the emergency department) and models (we developed a hybrid model in which the DAC splits their time between being onsite and performing phone outreach).

Maintain stakeholder engagement

Since the DACs are deployed at each clinic but managed by a central administrative team, it is critical to delineate communication channels and roles for central leaders and local champions. We recommend setting up a DAC program workgroup of key stakeholders to work through the implementation process and discuss issues as they arise, both ahead of launch and after launching, until the program is stable.
The DAC program represents our organization’s focus on digital health equity as critical to our mission. The program addresses certain key factors related to digital health equity (eg, digital literacy), but does not address all multilevel drivers (eg, broadband access/affordability, clinician-level factors). Future work should focus on the implications of digital navigation on care quality and clinical outcomes and extending digital navigation to other digital tools (ie, continuous glucose monitoring, remote blood pressure monitoring). As the healthcare system becomes increasingly digital, organizations can support patient portal enrollment, a key part of digital health equity, by creating and prioritizing digital navigation programs.
|
Automated orthodontic diagnosis via self-supervised learning and multi-attribute classification using lateral cephalograms

Malocclusion, also known as dental misalignment, refers to the improper positioning of teeth or an incorrect occlusal relationship between the upper and lower dental arches. As reported by the World Dental Federation, malocclusion can significantly impact patients' daily lives and increases the risk of developing dental caries and periodontal diseases. In severe cases, it can impair essential oral functions like speech, chewing, and swallowing, potentially causing psychological health issues. It is the third most common oral health issue following dental caries and periodontal disease, with a global prevalence of 56%, which underscores the critical need for prevention and treatment of malocclusion to improve quality of life and alleviate economic burdens. Many studies demonstrate that early diagnosis and intervention can significantly reduce the severity of future malocclusions, thereby lowering the complexity of later orthodontic treatment. Lateral cephalograms are widely used imaging tools for diagnosing malocclusions, treatment planning, and efficacy evaluation. A lateral cephalogram provides a two-dimensional view of the skull's side profile, including the teeth, jaw, soft tissues, cervical vertebrae, and airway, offering detailed insights into the craniofacial structure in a single image. Through the analysis of lateral cephalograms, doctors can assess the degree of skeletal and dental malocclusions in patients, enabling them to formulate appropriate treatment plans. The diagnosis of skeletal malocclusions determines whether orthodontic treatment, camouflage treatment, or orthognathic surgery is necessary, while dental malocclusion diagnosis is closely related to specific treatment plans. However, the conventional analysis process of lateral cephalograms is time-consuming, labor-intensive, and can be quite inefficient, especially in population-screening scenarios. The diagnostic reliability of lateral cephalograms depends on the experience of dentists. With the growing demand for orthodontic treatment, there is a notable shortage of qualified orthodontists, and the quality of diagnosis and treatment varies significantly across different regions. This disparity greatly limits the effectiveness of lateral cephalograms as a diagnostic tool. With the rapid advancement of artificial intelligence (AI), there is growing interest in automated orthodontic diagnosis compared to manual annotation by clinicians. Several AI-based methods have been introduced to streamline the diagnosis process and improve efficiency in orthodontic assessments using lateral cephalograms, primarily categorized into two types: landmark-based lateral cephalogram analysis and direct classification of lateral cephalograms. Automated landmark-based lateral cephalogram analysis methods hold significant utility in orthodontic diagnostics, offering efficient computational measurements against standard values for diagnostic classifications. However, these methods are susceptible to various sources of error, which can propagate through a series of calculations, making the assessment more complex and less straightforward. This error propagation can be difficult to evaluate, further complicating the reliability of the diagnostic outcomes.
Furthermore, in clinical measurements, using different measurement criteria may lead to contradictory diagnostic results, potentially limiting the clinical applicability of landmark-based methods. Direct classification methods for lateral cephalograms, by contrast, aim to increase diagnostic reliability by minimizing intermediate steps. Kim et al. found that a direct classification model based on a deep convolutional neural network was superior to an automatic landmark-based method for sagittal skeletal classification. Yu et al. proposed a convolutional neural network with transfer learning and data augmentation techniques for single skeletal classification, with an accuracy of 90.50%. Nan et al. adopted the DenseNet-121 network for automatic classification of the sagittal skeletal pattern in children, with a sensitivity, specificity, and accuracy of 83.99%, 92.44%, and 90.33%, respectively. Yim et al. employed a DenseNet-169 network as the classifier and adopted gradient-weighted class activation mapping to visualize the extracted features for automated orthodontic diagnosis, with a mean accuracy of 90.34%. Li et al. compared the performance of four different types of convolutional neural networks, including Visual Geometry Group (VGG), GoogLeNet, residual networks (ResNet), and DenseNet-161, on the classification of sagittal skeletal patterns, with a best accuracy of 89.58%. The above studies included only 1–3 classifications, which makes it difficult to meet clinical needs. Chang et al. extended the diagnostic classifications to eight categories by using the DenseNet-121 network; the accuracy for five of the diagnostic classifications was 80–90%, and the accuracy for the remaining three was 70–80%, which needs further improvement. Despite these advancements, existing direct classification methods often encounter performance biases due to imbalanced sample distributions among different attributes or classes in lateral cephalograms, which is a common issue in clinical settings. Moreover, most existing methods primarily concentrate on single-attribute classification, addressing specific orthodontic diagnostic requirements. However, craniofacial structures generally exhibit compensatory relationships, and there are potential correlations between different attributes or classes relevant to orthodontic diagnosis. Also, compared to multi-attribute classification, training multiple single-attribute models results in extended training times and slow iteration updates, which limits their suitability for comprehensive orthodontic diagnosis in clinical settings. To address these challenges, this study proposes a novel deep learning framework, named the SPMA network, for automated orthodontic diagnosis via self-supervised pre-training and multi-attribute classification using lateral cephalograms. A model weight initialization method based on masked image modeling is proposed: by pre-training the model on unlabeled data from multiple centers, it captures robust feature representations across cross-domain data distributions. A multi-attribute joint optimization network is designed, incorporating clinical prior knowledge to optimize multiple attribute classification tasks simultaneously and leveraging complementary information between different attributes to enhance performance. In clinical practice, orthodontists utilize lateral cephalograms to assess both skeletal and dental characteristics of patients, aiding in diagnosis and treatment planning.
The proposed method incorporates eight specific indicators, which comprehensively describe these features and provide qualitative support for clinicians. The contributions of this work are summarized as follows:

- A pre-training method based on multi-center lateral cephalograms was proposed, employing masked image modeling for self-supervised learning from diverse image domains, aiming to enhance model generalization when facing clinical data domain shifts.
- A multi-attribute classification network was proposed that optimizes parameters effectively by incorporating prior correlations between attributes, utilizing complementary information to improve performance in multi-attribute classification.
- Comprehensive evaluation on public and local clinical datasets demonstrated the superiority of this study over existing state-of-the-art (SOTA) methods, achieving a mean accuracy of 0.9002 and providing a potential tool for automated orthodontic diagnosis.

Evaluation metrics

For a comprehensive evaluation of the proposed SPMA framework, we employed various evaluation metrics, including the exact match ratio (MR), accuracy (Acc), and Hamming loss (HL) for the multi-attribute classification task. These metrics can be expressed using the following formulas:

MR: This is a strict metric that considers a sample prediction correct only if all attributes are predicted correctly. Assuming we have $n$ samples, where $y_i$ is the true label vector for the $i$th sample and $\hat{y}_i$ is the predicted label vector for the $i$th sample, MR can be expressed as:

$$\mathrm{MR} = \frac{1}{n} \sum_{i=1}^{n} I(y_i = \hat{y}_i). \tag{1}$$

Acc: This is a commonly used classification metric, here representing the proportion of correctly predicted attribute labels across all samples and attributes. Assuming we have $n$ samples and $m$ attributes, where $y_{ij}$ is the true label for the $j$th attribute of the $i$th sample and $\hat{y}_{ij}$ is the corresponding predicted label, Acc can be expressed as:

$$\mathrm{Acc} = \frac{1}{n \times m} \sum_{i=1}^{n} \sum_{j=1}^{m} I(y_{ij} = \hat{y}_{ij}). \tag{2}$$

HL: This is a metric for multi-label classification, representing the proportion of incorrectly predicted labels among all labels:

$$\mathrm{HL} = \frac{1}{n \times m} \sum_{i=1}^{n} \sum_{j=1}^{m} I(y_{ij} \neq \hat{y}_{ij}), \tag{3}$$

where $I(\cdot)$ is the indicator function, which takes the value 1 when the condition inside the parentheses is satisfied, and 0 otherwise.
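As a concrete illustration, the three metrics defined above can be computed directly from label arrays. The following is a minimal NumPy sketch, not the authors' released code:

```python
# Multi-attribute metrics for arrays of shape (n_samples, n_attributes).
import numpy as np

def exact_match_ratio(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    # A sample counts as correct only if all m attributes match (Eq. 1).
    return float(np.mean(np.all(y_true == y_pred, axis=1)))

def mean_accuracy(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    # Fraction of correctly predicted attribute labels over n*m labels (Eq. 2).
    return float(np.mean(y_true == y_pred))

def hamming_loss(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    # Fraction of incorrectly predicted labels over n*m labels (Eq. 3).
    return float(np.mean(y_true != y_pred))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    y_true = rng.integers(0, 3, size=(100, 8))  # 8 attributes, e.g. AP-Max ... AP-L1
    y_pred = y_true.copy()
    y_pred[rng.random(y_pred.shape) < 0.1] = 0  # corrupt roughly 10% of labels
    print(exact_match_ratio(y_true, y_pred),
          mean_accuracy(y_true, y_pred),
          hamming_loss(y_true, y_pred))
```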
Experimental results

In this study, we conducted a series of experiments to validate the effectiveness of each component of our proposed multi-attribute classification network, the SPMA network. The SPMA network consists of a Vision Transformer (ViT)-based encoder and a multi-head task network, specifically designed for automated orthodontic diagnosis. The training process of our model is divided into two stages. In the first stage, we use a self-supervised learning approach, masked image modeling, for image reconstruction tasks; this process allows us to obtain pre-trained weights for the encoder. This stage lasts for 400 epochs, with a base learning rate of 1.5e−4 and a warmup learning rate of 1e−6. We employ a cosine scheduler for learning rate adjustment and use AdamW as the optimizer. The batch size is set to 64, and the images are scaled to 224 × 224. In the second stage, we use the encoder trained in the first stage as the feature encoder and train the entire SPMA network. This training lasts for 30 epochs, with a base learning rate of 0.001. We use a step learning rate adjustment strategy, in which the learning rate is reduced by a factor of 10 every 10 epochs. The stochastic gradient descent (SGD) optimizer is used for training, with a batch size of 128. The images are again scaled to 224 × 224. This two-stage training process ensures the robustness and effectiveness of our proposed SPMA network. All training was conducted using two NVIDIA GeForce RTX 4090 GPUs (NVIDIA Corporation, Santa Clara, CA, USA).
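The second stage described above can be sketched as follows. This is an illustrative PyTorch sketch rather than the released implementation: the checkpoint name "mim_encoder.pth", the use of a torchvision ViT-B/16 as a stand-in backbone, and the assumed three classes per indicator are all assumptions:

```python
# Stage-2 sketch: ViT encoder with one classification head per attribute,
# jointly optimized with a summed cross-entropy loss.
import torch
import torch.nn as nn
from torchvision.models import vit_b_16

CLASSES_PER_ATTRIBUTE = [3] * 8  # assumed: 3 classes for each of the 8 indicators

class SPMANet(nn.Module):
    """ViT-based encoder plus a multi-head task network."""
    def __init__(self, classes_per_attribute):
        super().__init__()
        backbone = vit_b_16()
        backbone.heads = nn.Identity()  # keep only the 768-d class-token feature
        self.encoder = backbone
        self.heads = nn.ModuleList(nn.Linear(768, c) for c in classes_per_attribute)

    def forward(self, x):  # x: (batch, 3, 224, 224)
        feats = self.encoder(x)
        return [head(feats) for head in self.heads]

model = SPMANet(CLASSES_PER_ATTRIBUTE)
# model.encoder.load_state_dict(torch.load("mim_encoder.pth"))  # stage-1 weights

criterion = nn.CrossEntropyLoss()
# Stage-2 settings from the text: SGD, base lr 0.001, lr divided by 10 every 10 epochs.
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1)

def joint_loss(logits_list, labels):
    # labels: (batch, 8) integer tensor; summing per-attribute losses lets the
    # shared encoder receive gradients from all eight tasks at once.
    return sum(criterion(logits, labels[:, j]) for j, logits in enumerate(logits_list))
```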
The training loss of the self-supervised learning strategy and the visualization of the extracted features using the FeatUp module are demonstrated in Fig. . The results of the proposed SPMA network on multiple evaluation metrics, including MR, mean Acc, and HL, are displayed in Table . To the best of our knowledge, we are the first to apply multi-attribute classification to the task of automated orthodontic diagnosis using lateral cephalograms.

Ablation study

We performed ablation studies to understand the contribution of each part of our method. Specifically, we compared the performance of the encoder network obtained through self-supervised learning with that of an encoder network trained from scratch. We also conducted ablation studies comparing the multi-attribute classification task network with single-attribute classification to determine the contribution of the multi-attribute joint optimization. The baseline network consists of the same encoder network trained from scratch and a single-attribute classification task network for each attribute. The experimental results are presented in Table , where "+SSL" denotes the inclusion of the self-supervised learning strategy and "+MAC" signifies the integration of the multi-attribute joint optimization module. The eight attributes are maxillary anteroposterior position (AP-Max), mandibular anteroposterior position (AP-Mand), sagittal skeletal facial pattern (SKFP), vertical skeletal facial pattern (VSFP), inclination of upper incisors (Incl-U1), inclination of lower incisors (Incl-L1), anteroposterior position of upper incisors (AP-U1), and anteroposterior position of lower incisors (AP-L1).

Comparative study

Furthermore, we compared our proposed SPMA network with existing advanced automated orthodontic diagnosis methods using lateral cephalograms, including modified DenseNet, DenseNet-169, and DenseNet-121, to validate the effectiveness and advantages of our model, particularly in the context of a mixed multi-center dataset. The results of these experiments, presented in Table , demonstrate the superior performance of the SPMA network across various metrics. In addition, the misclassification rates were calculated and are provided in Supplementary Table 1, further supporting the evaluation of our model's performance. To visualize the activated regions during misclassifications, heatmaps are included in Supplementary Fig. 1. An attribute marked with "–" indicates that the data were not reported in the corresponding study. For a clearer representation of the model's performance, the receiver operating characteristic (ROC) curves of the SPMA network and other SOTA methods on two metrics (SKFP and VSFP) are shown in Fig. . The Chi-squared test result for the ROC curve on SKFP is 129.17, with a p-value of < 0.00001, and the Chi-squared test result on VSFP is 130.71, with a p-value of < 0.00001. These results suggest a significant deviation from the null hypothesis, indicating that the SPMA model's predictions are unlikely to have occurred by chance. This statistical significance affirms the reliability of the SPMA model's performance in classifying the data correctly. It is important to note that the Chi-squared tests were conducted specifically on the SPMA model's predictions compared to the ground truth (true labels), not directly comparing it to other models. The comparisons with other models, including the evaluation of eight parameters, are provided through quantitative performance metrics as shown in Table , and visual comparisons through the ROC curves are shown in Fig. . Based on the performance results presented above, the SPMA model demonstrates superior performance compared to the other models. However, it is important to clarify that, due to the absence of performance metrics at varying thresholds in the models from other studies, a precise statistical comparison was not possible.
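A hedged sketch of how such checks could be reproduced is given below. This is one plausible reading of the reported tests, not the authors' exact procedure; the arrays are illustrative placeholders for a single attribute such as SKFP:

```python
# One-vs-rest ROC curve for one class of a 3-class attribute, and a
# chi-squared test of association between predictions and ground truth.
import numpy as np
from scipy.stats import chi2_contingency
from sklearn.metrics import auc, confusion_matrix, roc_curve

rng = np.random.default_rng(0)
y_true = rng.integers(0, 3, size=500)      # 3-class attribute labels
y_score = rng.random((500, 3))
y_score[np.arange(500), y_true] += 1.0     # make the scores informative
y_score /= y_score.sum(axis=1, keepdims=True)
y_pred = y_score.argmax(axis=1)

# One-vs-rest ROC curve for class 1 of the attribute.
fpr, tpr, _ = roc_curve((y_true == 1).astype(int), y_score[:, 1])
print("AUC (class 1 vs rest):", auc(fpr, tpr))

# Chi-squared test on the predictions-vs-truth contingency table.
chi2, p, dof, _ = chi2_contingency(confusion_matrix(y_true, y_pred))
print(f"chi2 = {chi2:.2f}, p = {p:.3g}, dof = {dof}")
```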
The study introduces a novel deep learning framework, the SPMA network, specifically designed for automated orthodontic diagnosis using lateral cephalograms. This framework addresses several challenges in orthodontic diagnosis, such as domain shifts in clinical data and the need for effective multi-attribute classification. One significant contribution of this work is the proposed pre-training method based on multi-center lateral cephalograms. This method leverages masked image modeling for self-supervised learning from diverse image domains. By pre-training on unlabeled data from multiple centers, the model captures robust feature representations that generalize well across different data distributions. This approach enhances the model’s ability to handle domain shifts in clinical data, a common challenge in real-world orthodontic diagnosis scenarios. Furthermore, the study introduces a multi-attribute classification network that optimizes parameters effectively by incorporating prior correlations between attributes. Clinically, while the 8 classification criteria used to describe craniofacial features are relatively independent, there are inherent relationships among them. Based on this, we introduced a multi-attribute classification network. This network architecture utilizes complementary information between different attributes, enhancing the overall performance of multi-attribute classification tasks.
By jointly optimizing multiple attribute classification tasks, the proposed network improves diagnostic accuracy and provides a more comprehensive understanding of orthodontic conditions. The comprehensive evaluation conducted on both public and local clinical datasets demonstrates the superiority of the SPMA network over existing SOTA methods. The achieved mean accuracy of 0.9002 highlights the effectiveness of the proposed framework in automated orthodontic diagnosis. Since each classification has its own clinical significance, the aim of the study is to improve the performance of each individual classification. As shown in Table , compared to single-task training, the performance of each classification improved within this model. The lower performance in single-task training may be attributed to data imbalance. To achieve balanced improvement across all classifications, multi-attribute classification training for the 8 types is essential. Additionally, an error analysis was performed, revealing that most misclassifications occurred in borderline cases where the diagnostic features were less distinct. Further analysis of these misclassified cases showed that the predicted probabilities of the confused categories were quite close. This suggests that our model could potentially identify samples prone to confusion by computing the probability difference between categories. By flagging these cases for human review, we can reduce the impact of diagnostic inaccuracies and improve overall diagnostic accuracy. These results suggest that the SPMA network has the potential to serve as a valuable tool for orthodontists, assisting them in making accurate and efficient diagnostic decisions. Clinically, the eight indicators predicted by the proposed method comprehensively describe key craniofacial characteristics. AP-Max, AP-Mand, and SKFP reflect the sagittal development of the maxilla and mandible, as well as the relationship between them. SKFP classifications of Class II and Class III indicate the presence of skeletal deformities, necessitating more complex treatment approaches such as orthopedic correction, camouflage treatment, or orthognathic surgery compared to Class I cases. AP-Max and AP-Mand specifically illustrate the developmental status of the maxilla and mandible. The protrusion or retrusion of these structures dictates the required extraction sites and orthognathic procedures. VSFP indicates the vertical development of the jaws; severe hypodivergent or hyperdivergent cases may require orthognathic surgery. This diagnosis also influences the decision-making process for extraction plans; hyperdivergent cases generally support extraction, while hypodivergent cases require more careful consideration. Incl-U1, Incl-L1, AP-U1, and AP-L1 describe the inclination and protrusion of the upper and lower incisors, which directly affect the decision to pursue extraction-based treatment. In addition, the integration of automated orthodontic diagnosis through self-supervised pre-training and multi-attribute classification using lateral cephalograms presents substantial economic and operational benefits for public health. By reducing operation time and encapsulating the expertise of seasoned orthodontists, this approach enhances diagnostic efficiency and accuracy while minimizing errors, particularly among less experienced practitioners.
However, due to its black-box nature, the underlying diagnostic logic lacks transparency, which may lead to potential misjudgments, especially in patients with ambiguous classification boundaries. The multi-attribute analysis delivers a comprehensive evaluation, swiftly processing large volumes of influential data, which is invaluable for screening, case management, and generating rich data sources for orthodontic research. These data can support epidemiological studies and investigations into disease mechanisms, ultimately advancing the orthodontic field. Additionally, the application of multi-attribute classification provides new insights into bridging the gap between technological advancements and clinical practice. Clinically, it has been observed that there are inherent relationships between skeletal and dental characteristics. By leveraging AI, particularly the multi-attribute classification approach, we aim to incorporate these clinical experiences and patterns to enhance classification performance. The results have indeed confirmed the effectiveness of this method. Collectively, these improvements lead to more efficient and cost-effective orthodontic care, with broader implications for public health systems. Overall, the SPMA network offers a promising approach to automated orthodontic diagnosis, combining self-supervised pre-training with multi-attribute classification to achieve superior performance. Future research directions may include further validation on larger and more diverse datasets, exploration of additional clinical attributes, and integration of real-time diagnostic support tools based on the developed framework. In conclusion, this study presents a novel deep learning framework, the SPMA network, tailored for automated orthodontic diagnosis using lateral cephalograms, achieving a best MR score of 71.38%, an accuracy score of 90.02%, and an HL of 0.0425%. Through innovative strategies including masked image modeling for self-supervised pre-training and multi-attribute joint optimization, the SPMA network addresses key challenges in orthodontic diagnosis, including domain shifts in clinical data and effective integration of clinical prior knowledge. Overall, the SPMA network represents a promising innovation in orthodontics, providing an automated solution for diagnosis. It has the potential to significantly benefit both orthodontic practitioners and patients.

Dataset construction

This study constructed a new dataset comprising 3310 lateral cephalograms along with their multi-attribute classification labels. The images were retrospectively selected from lateral cephalograms obtained at Beijing Stomatological Hospital between January 2015 and December 2021. The images were acquired using a Kodak 8000C dental X-ray machine (Carestream Health, Canada) with the following parameters: voltage 80 kV, current 10 mA, and X-ray exposure time of 0.5 s. Inclusion criteria for the dataset were age greater than 14 years, while exclusion criteria included motion artifacts, facial trauma, and missing incisors. The participants’ ages ranged from 14 to 55 years, with a mean age of 24.5 ± 8.3 years. Notably, the presence of third molars was not documented. The lateral cephalograms were stored in Tagged Image File Format (TIFF) with an image resolution of 1360 × 1840 pixels. The dataset covered features from different skeletal and dental types.
Classification labels were assigned based on 8 commonly used diagnostic criteria, including AP-Max, AP-Mand, SKFP, VSFP, Incl-U1, Incl-L1, AP-U1, and AP-L1. Clinically, these 8 criteria are usually further subdivided into 3 subcategories to represent their specific subtypes. In this study, we aim to classify these subtypes across all 8 criteria simultaneously, thus improving efficiency and enhancing complementary information between the indicators to ultimately improve the model’s learning performance. These classifications were derived from a comprehensive analysis of 8 cephalometric measurement items, summarized using the Steiner analysis and the Tweed analysis. The specific measurements included SNA, SNB, ANB, SN-GoGn, U1-SN, IMPA, U1-NA, and L1-NB. Additionally, 324 lateral cephalograms from the publicly available 2015 Institute of Electrical and Electronics Engineers (IEEE) International Symposium on Biomedical Imaging challenge dataset were selected based on the study’s inclusion criteria to construct a multi-center dataset. Two orthodontists with 8 and 5 years of experience manually measured the craniofacial features of the two datasets, and consensus labels were obtained. To ensure that the orthodontists’ assessments were not biased, both were blinded to each other’s measurements and to any prior patient information, allowing for independent evaluations. These two datasets together form a mixed multi-center dataset used for the performance evaluation of the methods in this study. Figure displays example images from the two datasets, illustrating their distinctions. The datasets are divided into training, validation, and testing sets at a ratio of 7:2:1. The detailed information about the data distribution is presented in Table . This combined dataset provides a comprehensive and diverse set of data, enhancing the robustness and generalizability of the study’s findings.

Data augmentation

In consideration of the significant role of geometric information in orthodontic diagnosis within lateral cephalograms, four image augmentation techniques were applied to expand the data scale without altering the geometric information in the images. As shown in Fig. , these data augmentation techniques include random rotation by 10 degrees (Fig. a); color jittering with a brightness shift of 0.2, contrast shift of 0.2, saturation shift of 0.2, and hue shift of 0.1 (Fig. b); random affine transformation with a translation of 0.1 in both x and y directions (Fig. c); and Gaussian blur with a kernel size of 3 (Fig. d). By employing these transformations, we aimed to reduce the potential bias and performance issues caused by imbalanced data distribution; a possible implementation of this pipeline is sketched below.
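One plausible torchvision composition of the four augmentations is the following; note that in torchvision, degrees=10 and translate=(0.1, 0.1) specify sampling ranges (up to ±10 degrees and up to 10% shift), which is our reading of the reported settings rather than a detail confirmed by the text.

```python
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomRotation(degrees=10),                     # random rotation (Fig. a)
    transforms.ColorJitter(brightness=0.2, contrast=0.2,
                           saturation=0.2, hue=0.1),           # color jittering (Fig. b)
    transforms.RandomAffine(degrees=0, translate=(0.1, 0.1)),  # translation only (Fig. c)
    transforms.GaussianBlur(kernel_size=3),                    # Gaussian blur (Fig. d)
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
```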
The pipeline of the SPMA framework

The proposed SPMA network comprises a ViT-based encoder and a multi-head task network for automated orthodontic diagnosis. The encoder initializes its weights based on a self-supervised learning task of image reconstruction. The multi-head task network achieves classification of various attributes in orthodontic diagnosis through joint optimization using multiple fully connected layers tailored for different attributes. The pipeline of the SPMA network is illustrated in Fig. .

Self-supervised pretraining using masked image modeling

To obtain a category-independent cross-domain feature representation, we propose a self-supervised image reconstruction method that aims to learn feature representations from unlabeled image data. The proposed self-supervised pre-training process is shown in Fig. . Initially, we applied a random mask with a mask ratio of 0.75 to the input image, generating a partially masked image which is subsequently divided into several patches. This masking and patching strategy is employed to encourage the model to focus on various parts of the image and to learn robust, spatially diverse features, which is crucial for capturing the underlying structure and relationships within the image. By dividing the image into patches, the model can analyze and reconstruct different regions independently, leading to a more comprehensive and generalized feature representation. To be more specific, let $I$ denote the input image. We use the PatchEmbed module to divide $I$ into $N$ image blocks, each of size $P \times P$. We then embed each image block into a $D$-dimensional vector, where $D$ represents the embedding dimension (embed_dim). This can be expressed as:

$$X = \mathrm{PatchEmbed}(I) \in \mathbb{R}^{N \times D}. \tag{4}$$

Next, we add positional embeddings (pos_embed) to the embedded image blocks, resulting in:

$$X = X + \mathrm{pos\_embed} \in \mathbb{R}^{N \times D}. \tag{5}$$

Here, $\mathbb{R}$ represents the set of real numbers, and $\times$ denotes the Cartesian product. The PatchEmbed function maps the input image $I$ to a matrix $X$ with dimensions $N \times D$, where $N$ is the number of image blocks and $D$ is the embedding dimension. Incorporating positional embeddings enriches the embedded features with spatial information, thereby enhancing the model’s representation capabilities. These patches and their positional embeddings are then input into the vision transformer encoder, which is composed of multi-head attention and feedforward neural networks, with add and norm operations applied sequentially. The encoded features are compactly represented and then passed through the ViT-based decoder. Specifically, we pass $X$ through a series of Transformer blocks, each comprising a multi-head attention mechanism and a feedforward network. For each $\mathrm{Block}_i$, the process can be represented as:

$$X = \mathrm{Block}_i(X). \tag{6}$$

Finally, we apply a normalization layer and a linear classifier to $X$, resulting in the final output $Y$:

$$Y = \mathrm{head}(\mathrm{norm}(X)), \tag{7}$$

where $\mathrm{Block}_i$ represents the $i$th Transformer block, and depth signifies the total number of Transformer blocks in the series. The head function denotes the linear classifier, and norm refers to the normalization layer applied to $X$. This process concludes the transformation of $X$ through the Transformer architecture, producing the output $Y$ with enhanced features suitable for the image reconstruction task. The output $Y$ of the encoder was passed through the ViT decoder. The decoder, similar to the encoder, consists of multi-head attention and feedforward networks, yet includes a drop path module to enhance the stability and performance of the training process. Finally, the reconstructed image is obtained after processing through the ViT decoder. The reconstructed image exhibits enhanced details and reduced artifacts compared to the original masked image.
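The masking step can be made concrete with a minimal sketch of MAE-style random masking, consistent with the 0.75 mask ratio described above; the 16 × 16 patch size mentioned in the final comment is an assumption.

```python
import torch

def random_masking(x: torch.Tensor, mask_ratio: float = 0.75):
    """MAE-style random masking over a batch of patch embeddings.

    x: (B, N, D) patch embeddings (after PatchEmbed and positional embedding).
    Returns the visible patches, the binary mask (1 = masked), and the indices
    needed to restore the original patch order before reconstruction.
    """
    B, N, D = x.shape
    n_keep = int(N * (1 - mask_ratio))

    noise = torch.rand(B, N)                   # one random score per patch
    ids_shuffle = torch.argsort(noise, dim=1)  # ascending: lowest scores are kept
    ids_restore = torch.argsort(ids_shuffle, dim=1)

    ids_keep = ids_shuffle[:, :n_keep]
    x_visible = torch.gather(x, 1, ids_keep.unsqueeze(-1).expand(-1, -1, D))

    mask = torch.ones(B, N)
    mask[:, :n_keep] = 0
    mask = torch.gather(mask, 1, ids_restore)  # back to original patch order
    return x_visible, mask, ids_restore

# With 224x224 inputs and 16x16 patches, N = 196; a 0.75 mask ratio keeps 49 patches.
```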
Multi-attribute classification network

After thorough training of the self-supervised learning model, the weights of its encoder part are saved in this study. Based on this encoder, features are extracted to construct a multi-attribute classification network. In this network, the input features $Y$, derived from the pre-trained encoder weights via self-supervised learning, serve as shared features. These shared features are processed by a network comprising multiple groups of fully connected layers, with each group corresponding to a specific attribute. The output for each attribute is generated as a classification output. We denote the fully connected layer corresponding to the $i$th attribute as $f_i$. The classification output for the $i$th attribute, denoted as $C_i$, is then given by:

$$C_i = f_i(Y), \tag{8}$$

where $Y$ represents the input features, and $f_i$ represents the fully connected layer corresponding to the $i$th attribute. The classification output $C_i$ is the output corresponding to the $i$th attribute. The proposed multi-attribute classification network processes the encoded features, facilitating the simultaneous generation of classification outputs for multiple attributes. This versatility enhances the network’s adaptability across various scenarios, thereby bolstering its applicability in diverse contexts.

Loss functions

In this study, considering the issue of imbalanced category distributions within various attributes, we adopted Focal Loss as the loss function for intra-attribute class classification. Focal Loss is designed to address class imbalance problems. Given that there are $C$ categories, where $y$ represents the true category and $p$ is the model’s predicted probability distribution, the Focal Loss is defined as:

$$\mathrm{FL}(p_t) = -\alpha_t (1 - p_t)^{\gamma} \log(p_t), \tag{9}$$

where $p_t$ is the predicted probability for the true category $y$, with $p_t = p$ when $y = 1$ and $p_t = 1 - p$ when $y = 0$. $\alpha_t$ is a balance factor used to adjust the weights of each category, and $\gamma$ is a tuning factor used to reduce the weight of simple samples and increase the weight of difficult samples. In this study, there are a total of 8 attribute classification tasks. The Focal Loss for each attribute $i$ is denoted as $\mathrm{FL}_i$, and each attribute has a weight $w_i$. Therefore, the overall loss $L$ of the network can be represented as the weighted average of the Focal Loss for each attribute:

$$L = \frac{\sum_{i=1}^{m} w_i \, \mathrm{FL}_i}{\sum_{i=1}^{m} w_i}, \tag{10}$$

where $m$ represents the total number of attributes. The weight vector $w_i$ is defined based on the importance of different attributes as determined by dentists. This formulation calculates the weighted average of the Focal Losses for each attribute, considering their respective weights in the overall loss of the network.
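A compact PyTorch sketch of Eqs. (8)–(10) follows: per-attribute linear heads over the shared features, a multi-class focal loss, and the weighted average across attributes. The feature dimension, α, γ, and the attribute weights are placeholders; the text states only that the weights reflect attribute importance as judged by dentists, and it uses a per-category $\alpha_t$ where this sketch uses a single scalar for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiAttributeHead(nn.Module):
    """Shared encoder features -> one linear classifier per attribute (Eq. 8)."""
    def __init__(self, feat_dim: int = 768, n_attributes: int = 8, n_classes: int = 3):
        super().__init__()
        self.heads = nn.ModuleList(
            [nn.Linear(feat_dim, n_classes) for _ in range(n_attributes)]
        )

    def forward(self, y: torch.Tensor):
        return [head(y) for head in self.heads]  # C_i = f_i(Y), one logit set per attribute

def focal_loss(logits, target, alpha=0.25, gamma=2.0):
    """Multi-class focal loss (Eq. 9); the alpha and gamma values are assumptions."""
    log_p = F.log_softmax(logits, dim=-1)
    log_pt = log_p.gather(1, target.unsqueeze(1)).squeeze(1)  # log p_t of the true class
    pt = log_pt.exp()
    return (-alpha * (1 - pt) ** gamma * log_pt).mean()

def total_loss(all_logits, all_targets, weights):
    """Weighted average of per-attribute focal losses (Eq. 10)."""
    losses = [w * focal_loss(lg, tg) for w, lg, tg in zip(weights, all_logits, all_targets)]
    return torch.stack(losses).sum() / sum(weights)
```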
Surgical treatment of cerebellar pontine angle lipoma combined with trigeminal neuralgia: A case report

Intracranial lipomas are rare lesions accounting for 0.1% to 1.5% of all intracranial tumors and 0.14% of all cerebellopontine angle (CPA) tumors. Approximately 45% of intracranial lipomas occur in the interhemispheric fissure, and approximately 10% occur in the CPA. Although CPA lipomas may be incidentally detected because they can remain asymptomatic, they may become progressively symptomatic as they tend to wrap around the blood vessels and cranial nerves. Patients may experience hearing loss, vertigo, hemifacial spasms, facial sensory disorders, and trigeminal neuralgia (TN). Owing to the risk of surgical complications, conservative treatment is currently the mainstay, especially for slow-growing tumors and painless lesions, and frequent imaging is typically recommended to examine lesion growth. Carbamazepine and oxcarbazepine are first-line pharmacological treatments for TN. However, several patients experience side effects, and those with persistent pain are unlikely to respond well to treatment. Surgical removal of CPA lipomas should be considered only in the presence of persistent or progressive symptoms or tumor growth. Herein, we report a case of CPA lipoma combined with TN in a patient who had been receiving TN treatment for 20 years. The patient experienced gradual medication failure and underwent surgical treatment involving resection of part of the lipoma tissue and microvascular decompression of the trigeminal nerve. After surgery, the patient experienced complete relief of TN, and no new neurological dysfunctions appeared.
A 54-year-old female presented with a 20-year history of maxillofacial pain in the left corner of the mouth, predominantly in the V3 dermatome. Initially, pain control was achieved using oral carbamazepine, oxcarbazepine, pregabalin, and painkillers. She had not undergone any head imaging until 4 years ago, when the pain worsened and brain magnetic resonance imaging (MRI) identified a lesion in the left CPA. To achieve pain control, the medication dosage was gradually increased. However, over the last 6 months, the patient’s condition progressively deteriorated, with frequent episodes of pain severely impairing her life. Physical examination revealed percussion pain in the V3-innervated area of the left trigeminal nerve. The remaining neurological examinations revealed no abnormalities. The patient had no relevant medical history. She had no personal or family history of brain tumors or TN and no history of other diseases. The results of routine complete blood count, kidney function, liver function, and coagulation function tests were normal, and no tumors were detected in other parts of the body. Computed tomography (CT) of the head revealed a hypodense lesion in the left CPA (Fig. A). Brain MRI revealed a 1.3 × 0.9 cm lesion in the left CPA region (Fig. B–D). The mass displayed high signal intensity on T1WI and T2WI images, which was not enhanced after gadolinium administration. Diagnostic imaging suggested lipoma. MRI showed that the lesion in the left CPA region had not changed in size over the last 4 years. Head 3-dimensional time-of-flight magnetic resonance angiography (3D-TOF-MRA) detected the superior cerebellar artery (SCA) adjacent to the trigeminal nerve root, and the left CPA tumor was closely related to the trigeminal and auditory nerves (Fig. E). After careful consideration of the patient’s history, physical examination, laboratory tests, and imaging findings, we established a preliminary diagnosis of a left CPA lipoma combined with TN. The CPA lipoma and SCA compressed the trigeminal nerve. Because the pain was persistent and poorly controlled with drug therapy, we decided to perform microvascular decompression of the trigeminal nerve via the suboccipital retrosigmoid approach. The patient consented to undergo surgery.
The specific surgical steps were as follows:

Step 1: After administering general anesthesia, the patient was placed in the prone position. A C-shaped incision was made behind the left ear, the scalp and occipital muscles were incised, the occipital bone was exposed, and a milling cutter was used to mill a bone window of approximately 3 × 3 cm; this bone window exposed the left transverse sinus and left sigmoid sinus.

Step 2: The dura mater was cut under the microscope, the dural flap was turned toward the auricular side, and the arachnoid membrane was cut to release cerebrospinal fluid from the cisterna magna, allowing the cerebellar tissue to collapse.

Step 3: Examination of the CPA revealed a mass of yellow adipose tissue encircling the facial-auditory nerve complex (Fig. A), which was in close contact with the trigeminal nerve root (Fig. B).

Step 4: The arachnoid tissue surrounding the trigeminal nerve was clipped to reveal the cisternal segment of the trigeminal nerve from the root entry zone to Meckel’s cave. This revealed that the SCA was compressing the trigeminal nerve root from the medial side (Fig. C), whereas the CPA lipoma abutted the trigeminal nerve root on the lateral side.

Step 5: Part of the tumor tissue on the side close to the trigeminal nerve was excised to reduce the tumor size and relieve the compression of the trigeminal nerve root by the tumor. A Teflon felt was placed between the tumor and the trigeminal nerve root (Fig. A) and between the trigeminal nerve root and the SCA (Fig. B).

Step 6: The dura mater was tightly closed, and the occipital musculature and skin were sutured.

Postoperatively, the patient’s facial pain was completely relieved, with no new neurological disorders. Pathological examination revealed small amounts of fat, fiber, and nerve fiber tissue, which confirmed the diagnosis. The patient was followed up for 6 months and experienced no further TN attacks.
CPA lipomas are rare lesions representing 0.1% of all CPA tumors; their combination with TN is even rarer. Intracranial lipomas are neither hamartomas nor true tumors but should be considered congenital malformations resulting from the abnormal persistence of the “meninx primitiva” during the development of the subarachnoid cisterns and its differentiation into adipose tissue. Most intracranial lipomas are asymptomatic and discovered incidentally. The signs and symptoms of CPA lipomas, including hearing loss (62%), dizziness (45%), TN, sensory impairment in the distribution of the fifth cranial nerve (14%), and facial dysfunction (9%), depend on the nerve structures of this region. Imaging should differentiate lipomas from vestibular schwannomas, meningiomas, arachnoid cysts, and epidermoid cysts. CPA lipomas typically present as low-density masses on CT images. Lipomas are hyperintense on T1-weighted images, show signal loss on fat-suppression sequences, and are not enhanced by gadolinium. The disappearance of the mass with fat-suppression techniques is specific to lipomas. With the widespread use of MRI, histopathological diagnosis is rarely required. Currently, the initial diagnosis is mainly based on MRI features. Our initial diagnosis of CPA lipoma was based on the CT and MRI images of the patient. The diagnosis was confirmed based on the appearance during surgery and the final histopathological findings. Classical TN (80–90%) is caused by compression of the trigeminal nerve root by neighboring blood vessels. Tumors may be responsible for up to 5% of TN cases. Tumor compression of the trigeminal nerve leads to focal demyelination of the trigeminal root, triggering the same high-frequency discharges in the exposed axons as in vascular compression of the nerve. Typical TN usually presents with recurrent remitting pain accompanied by periods of complete pain relief, whereas atypical TN presents with persistent or sub-persistent pain. Secondary TN usually has no periods of inactivity. Detailed preoperative imaging is pivotal for identifying the underlying cause of TN. Fused 3D-TOF-MRA and 3-dimensional constructive interference in steady-state images are reliable, noninvasive tools for evaluating diseased vessels and disease extent in patients with neurovascular compression. This patient underwent a comprehensive 3D-TOF-MRA examination, revealing that the SCA and the CPA lipoma were adjacent to the trigeminal nerve root. Intraoperative treatment of both the blood vessels and the tumor is required for complete relief of TN. Lipomas grow slowly. For instance, Totten et al did not observe persistent lipoma growth in 17 patients over a mean follow-up period of 47 months. However, lipomas typically entangle cranial nerves and are highly vascularized; hence, complete surgical removal can be challenging, and even patients who have undergone incomplete excision or simple biopsy may experience serious sequelae postsurgery. Considering this slow or absent growth, conservative treatment is the first option. Carbamazepine and oxcarbazepine, both anticonvulsant agents, are considered the treatment of choice for controlling paroxysmal pain in patients with TN, regardless of the etiology. These drugs elicit effective pain relief in almost 90% of patients. However, clinical improvements are frequently offset by side effects such as dizziness, diplopia, ataxia, and elevated transaminase levels, one or more of which may result in 23% of patients discontinuing treatment.
In this case, our patient had TN for 20 years, which was initially manageable with medication. However, even with a gradual increase in medication dosage, the pain became progressively difficult to control. Although surgery is the most invasive technique for treating TN, it offers the lowest pain recurrence rate and highest patient satisfaction. One challenge encountered during this study was the intraoperative removal of the lipoma, which wrapped around the facial-auditory nerve complex and contained abundant tiny blood vessels, making intraoperative hemostasis difficult and risking facial-auditory nerve injury. Surgical removal of CPA lipomas may lead to serious consequences, as the tumor tends to attach to nerves, vascular structures, and surrounding tissues. Complete removal of lipomas can be complicated, and even patients who undergo incomplete resection or simple biopsy may experience severe postoperative neurological deficits. Dudulwar et al reported that non-radical resection can result in hearing loss and grade 4 facial palsy. In the present case, after establishing that the lesion was a lipoma and that the SCA was adjacent to the trigeminal nerve, surgery was performed to address the patient’s progressively deteriorating symptoms. Intraoperatively, it was confirmed that the lipoma and SCA compressed the trigeminal nerve root. Lipomas located lateral to the trigeminal nerve root restrict the movement of the nerve root. Although the lipoma encapsulated the facial-auditory nerve complex, no associated symptoms were observed. Instead of complete tumor resection, partial resection was performed to reduce the tumor size. Teflon felt was placed on both sides of the trigeminal nerve root to relieve the compression induced by the tumor and blood vessels. The thickened arachnoid membrane enveloping the trigeminal nerve was cut to loosen the pull on the trigeminal nerve, and decompression of the entire cisternal segment of the trigeminal nerve was performed, which considerably increased the mobility of the trigeminal nerve. The TN was completely relieved postoperatively, and other neurological examinations were unaltered. The patient recovered well after the surgery and did not require oral medication. Thus, when conservative treatment for a cerebellopontine angle lipoma combined with TN is ineffective, surgical decompression of the trigeminal nerve root is recommended. Protecting neurological function is important when resecting a lipoma, and as there are often nerves and blood vessels penetrating the lipoma, total excision should be avoided to prevent new neurological dysfunctions. This study has some limitations. First, this single case cannot be considered generalizable. Second, although the 6-month follow-up showed no signs of symptom recurrence, the long-term effects of this treatment still need further investigation.
The coexistence of CPA lipomas and TN is a rare phenomenon, and nonsurgical conservative treatment is preferred. However, surgery should be considered in cases of progressively worsening pain that is poorly managed with conservative treatment. Detailed preoperative MRI and 3D-TOF-MRA are crucial for identifying the primary cause of TN. The goal of surgery is not to completely remove the lipoma but to completely relieve trigeminal nerve compression. In the current case, compression by the SCA was identified as the underlying cause of TN, in addition to the CPA lipoma, and surgical release of the compression resolved the patient’s facial pain.
We express our gratitude to all those who participated in this study.
Conceptualization: Yu-Ting Yin, Chao Gui. Data curation: Yu-Ting Yin. Formal analysis: Chao Gui. Investigation: Yu-Ting Yin. Methodology: Chao Gui. Supervision: Chao Gui. Validation: Chao Gui. Writing – original draft: Yu-Ting Yin, Chao Gui. Writing – review & editing: Yu-Ting Yin, Chao Gui.
Primary care clinician perspectives on automated nephrology e-consults for diabetic kidney disease: a pre-implementation qualitative study

Diabetic kidney disease (DKD) is the leading cause of chronic kidney disease (CKD) and kidney failure in the US and is associated with significant cardiovascular morbidity and mortality. Despite the enormous health burden of DKD, many patients with DKD currently do not receive evidence-based, guideline-recommended treatment crucial for reducing DKD progression and complications. In studies examining diverse health systems and population settings, the proportion of persons receiving guideline-recommended medications to slow DKD progression, such as angiotensin converting enzyme inhibitors (ACEi) and angiotensin II receptor blockers (ARB), has consistently been 50–60% without improvement over time. Projected increases in diabetes and kidney failure incidence highlight the critical need for innovative population health approaches for improving delivery of optimal DKD care. Previously studied interventions to improve DKD or CKD care have shown mixed success and have included educational programs directed at PCPs, audit-based performance feedback, and electronic health record-embedded clinical decision support. A meta-analysis of randomized controlled trials evaluating interventions to improve CKD management in the primary care setting found no benefit to either computer-assisted or education-related interventions, compared to usual care, for the outcomes of improving guideline-concordant ACEi or ARB prescription, proteinuria assessment, or blood pressure control. Implementations of clinical decision support have been hindered by alert fatigue and poor individualization of actionable recommendations to PCPs, leading to inconsistent improvements in care. Proactive electronic consultations (e-consults) are an emerging system-level intervention strategy that could potentially allow nephrologists to provide timely and evidence-based guidance to PCPs engaged in early DKD care. In contrast to the traditional referral or e-consult framework that requires PCPs to initiate the consultation, proactive e-consults involve a strategy to identify patients who could benefit from specialist input at the system level, after which specialists would conduct a targeted chart review and provide their recommendations to PCPs as an e-consult. Strategies to identify patients for DKD management may include laboratory criteria (e.g., elevated albuminuria) or validated kidney failure risk prediction models, which can be applied at the health system level to identify the target population for proactive e-consults. Patients identified in this manner can then be individually reviewed by nephrologists, resulting in individualized recommendations that are delivered to the patient’s PCP in the form of an e-consult message. The proactive nature, which does not require PCPs to initiate the e-consult request, is the key distinguishing feature compared with traditional e-consults or referral mechanisms. E-consult documentation would leverage existing documentation infrastructure in the electronic health record. E-consult contents, including the specialist recommendations as well as subsequent communications between the PCP and the specialist, become part of the permanent electronic health record.
E-consults are visible to all clinicians accessing the chart, and in many health systems, visible to patients as well. Potential advantages of this e-consult strategy include (1) the proactive nature, which does not rely on PCPs’ explicit recognition, diagnosis of DKD, or decision to refer, (2) expert specialist input, which allows recommendations to be more tailored to individual patients compared with what is possible with clinical decision support rules, and (3) the interactive capability, which allows PCPs to electronically discuss management with specialists to clarify or further tailor treatment recommendations to specific patient scenarios. Conversely, potential disadvantages of a proactive e-consult strategy include unclear acceptability of the proactive approach among PCPs or patients and logistics of primary care workflows for implementing unsolicited e-consult recommendations, which may depend substantially on how primary care clinics are set up and operate within different health systems. Versions of proactive e-consults have been implemented in a few settings, such as for osteoporosis management in a regional veteran population and for high-risk CKD care in the Kaiser Permanente Hawaii health system. These proactive e-consult programs demonstrated only modest effectiveness in improving treatment rates. The objective of this study was to explore perspectives from PCPs practicing in three different health systems about potential barriers and facilitators associated with proactive e-consults. These findings may provide valuable pre-implementation insights to inform the optimal design and development of a proactive e-consult program to improve guideline-concordant DKD care delivery.

Study design and population

We conducted semi-structured qualitative interviews with PCPs. Participants were purposively sampled across practice sites in three different health systems, each of which comprised multiple primary clinic sites: an academic health system, an urban public safety net health system serving under-insured and uninsured populations, and a Veterans Affairs (VA) health system. Each health system also had its own network of specialists, including nephrology. We approached medical directors at primary care clinic sites in each health system to identify potentially eligible PCPs, who were then invited to participate via email. In some clinics, PCPs were invited by email from the study team after being referred by their director; in other clinics, the director shared the study invitation broadly with PCPs. Eligible PCPs were defined as clinicians actively practicing primary care. We used broad inclusion criteria to mimic clinicians in a busy primary care practice. Physicians (MD or DO), nurse practitioners, and physician assistants were eligible, and we did not require a minimum percent professional effort dedicated to primary care as long as it was not zero. Due to limited independent practice experience, trainees were not included. Verbal consent was obtained before each interview. The study protocol was approved by the University of California, San Francisco Institutional Review Board (#22-37188).

Data collection

Semi-structured interviews were conducted via Zoom video conference by a physician-investigator (C.D.C.) trained in interviewing methodology. All interviews were completed between February 2023 and October 2023. Interviews lasted up to one hour and were audio recorded and transcribed.
An interview guide was used to facilitate discussion about potential implementation of a proactive nephrology e-consult program for DKD management (Item S1). Questions were designed to elicit challenges in delivery of guideline-recommended DKD care, potential barriers and facilitators to a proactive e-consult intervention, and suggestions for optimizing the intervention’s effectiveness and integration into PCPs’ existing workflows. Demographic and practice-related information were self-reported by participants, including: gender, race and ethnicity, years in practice, training background, and percent time dedicated to direct patient care. Sample size was guided by interim assessment of interview data for thematic saturation; in similar prior work, we reached thematic saturation at 14 interviews.

Analysis

Data were analyzed using a rapid qualitative analysis methodology. Interview transcripts for each participant were reviewed and consolidated into a matrix organized by broad themes and subthemes as they emerged from the data related to PCPs’ responses to proactive e-consults. For this process, three members of the research team (C.D.C., D.D., D.S.T.) held regular in-person meetings to iteratively review transcripts and identify, refine, and achieve consensus on a final list of themes and subthemes. Representative quotations were extracted to illustrate subthemes. We followed the Consolidated Criteria for Reporting Qualitative Research checklist for reporting of qualitative research.
A total of 18 interviews were conducted among 6 academic, 8 safety net, and 4 VA PCPs (Table ). The median number of years in practice was 6 (interquartile range 4–12). Training backgrounds included internal medicine (n = 13), family medicine (n = 4), and nurse practitioner (n = 1). To provide contextual background, we will first summarize the barriers to delivery of guideline-recommended DKD care as reported by PCPs in their current practice before presenting the results of thematic analysis. PCPs identified a number of barriers including (1) difficulty staying up to date with clinical practice guidelines, (2) challenges related to new medications and medication management, and (3) limitations due to poor patient access and continuity of care (Table ). These challenges were consistently expressed by PCPs across all three health systems. One frequent challenge was keeping up with the volume of evolving clinical practice guidelines across multiple chronic conditions, especially after their training period (Quotation [Q]1; Table ). Many reported having internalized elements of DKD care during their training (e.g., knowing to use ACEi or ARB) but reported low explicit awareness of specific guideline criteria, organizations creating clinical practice guidelines, and updates to those guidelines (Q2; Table ). Limited comfort with prescribing and patient counseling for new drug classes, such as sodium-glucose cotransporter 2 (SGLT2) inhibitors, was also cited as a barrier to guideline-recommended care (Q3; Table ). PCPs reported that prescribing indicated medications could be deferred if patients were having difficulty adhering to their current regimen. This was compounded by reported difficulty counseling patients on why changing their regimen or adding new medications was indicated, particularly for patients whose blood pressure and diabetes were well-controlled on existing regimens:

My experience has been it’s really hard to drive pushing [guideline-indicated] medicines as hard as we need to push them, partly because the patient population we take care of takes a long time to develop trust with us.
And the last thing we want to do is, when they get in, say…we’re gonna give you…an SGLT2, and we’re gonna…drive your lisinopril up as high as we can get it, even though…your blood pressure looks okay…it’s hard for patients to take multiple, multiple changes from us. (Q4; Table )

Limitations in patient access to care and in continuity of care were also consistently reported as significant barriers impeding the safe prescribing and monitoring practices needed for optimal DKD care; even when patients are able to attend appointments, more active medical and/or social issues may be prioritized over optimizing chronic disease management (Q5 & Q6; Table ).

With regard to proactive e-consults, PCPs were generally supportive of the concept as a mechanism for facilitating guideline-recommended care delivery and ensuring optimal treatment for patients with DKD. Three major themes emerged from the interviews: (1) perceived potential benefits of proactive e-consults, (2) concerns about the proactive nature of e-consults, and (3) leveraging care teams to facilitate recommended DKD care.

Theme 1: Potential benefits of proactive e-consults
Subtheme: Educational value
PCPs acknowledged the potential educational value of nephrology e-consults in facilitating delivery of guideline-recommended DKD care, particularly in the setting of rapidly evolving guidelines and limited early experience with new medications. The elements identified as potentially most high-yield to include in nephrology e-consults were specific medication recommendations (dosing, patient counseling, and how to monitor), the diagnostic workup needed to establish CKD etiology, and guidance on when to refer a patient to nephrology. In addition, while concise, concrete recommendations were preferred, PCPs suggested they would also appreciate e-consults containing references to the practice guidelines behind the recommendations, which could help them internalize guideline changes and extend that knowledge to the care of other patients (Q1; Table ). The ability to have two-way, “back-and-forth” interaction with nephrologists was also identified as a particularly valuable feature of e-consults for promoting learning: As a PCP, I learn a lot from consultants. And I learn a lot more when there’s a back and forth with a specialist. And so like having an e-consult platform – incredibly helpful, right? Cause they don’t even need to see the patient necessarily. If I feel supported to start a medication, I have the parameters like what dose do I start, what’s my target dose, how do I get there, what monitoring do I need to do. What side effects do you commonly see? If I can get all that information from a specialist, I’m happy to do it. I don’t want to inconvenience a patient, and it’s a learning opportunity for me. (Q2; Table )

Subtheme: Improved access to specialists
PCPs described a degree of hesitance about initiating nephrology referrals, acknowledging high specialist workload and the perception that much of what would be done in nephrology clinic could often be done in primary care (Q3; Table ). Participants expressed that proactive e-consults could “lower the activation energy” for accessing nephrology expertise and could be a way to overcome PCPs’ hesitance to refer, particularly for patients with less severe kidney disease who may have unrecognized high-risk features (Q4; Table ).
In addition, participants noted that e-consults could serve as a means to provide nephrology care for patients who face barriers to attending specialist appointments or who prefer to see only their PCP.

Subtheme: Reassurance of care plan
PCPs also saw value in proactive e-consults if they provided reassurance that the existing care plan for each patient was appropriate based on the most recent evidence and guidelines, and in particular that providers were not “missing anything” in the workup and treatment related to kidney disease (Q5; Table ). In turn, this reassurance could allow PCPs to focus on working with patients to overcome obstacles to guideline-recommended care and to facilitate adherence, rather than on wondering whether they were overlooking anything needed to achieve optimal DKD outcomes: I think most of us in primary care would really welcome the input. And you know, if only to be able to focus our efforts on trying to overcome the obstacles the patient faces in being adherent, instead of having all of our efforts being trying to figure out what’s the next best medicine or what should we be doing. So I think having that sort of clearly outlined would then allow us to focus our energy on trying to implement it instead of trying to figure it out. (Q6; Table )

Theme 2: Concerns about proactive e-consults
Subtheme: Privacy concerns for patients
Participants expressed that the majority of their patients would likely view proactive chart review by specialists positively, as an opportunity to improve their health. However, there may be a small number of patients who would be less open to such a program due to privacy concerns, particularly regarding unsolicited chart review, without their explicit consent, by providers they have not met (Q7; Table ). PCPs suggested that patients could be more accepting if the process were framed as a system-level effort to help ensure patients are getting recommended care: I don’t think [patients] would mind it. I mean, it’s all part of our system of trying to improve things. I mean, in that if we had to sell it to the patients, that’s the way I would sell it, is that we’ve been doing this a lot, right? We’re all part of one big system that’ll work together to help you. (Q8; Table )

Subtheme: Appearance of substandard care
PCPs also raised the concern that e-consults could be viewed as documenting their delivery of substandard care in a way that is visible to others, potentially including patients: I guess there might be people who feel like now that a specialist gave me recommendations, I have to abide by them and it’s another thing on my plate to do, and it’s in the chart. So it’s like, if I don’t follow these recommendations, then that’s like I’m providing substandard care. And it’s documented. (Q9; Table )

Several participants emphasized that it was critical for proactive e-consult recommendations to be written in a manner that would not feel punitive or judgmental when identifying patients not receiving guideline-recommended treatment, recognizing that there may be legitimate barriers to optimal care delivery in individual patients that PCPs are actively addressing (Q10; Table ). Several participants also expressed concerns about potential medicolegal implications of proactive e-consults, particularly if specialist recommendations documented in an e-consult were not followed (Q11; Table ).
Subtheme: Increased burden on PCP
PCPs at all sites raised concerns about the potential for proactive e-consults to increase the workload of already busy PCPs, who would then need to arrange a means of implementing the newly recommended care. They reported that clinicians were already familiar with receiving unsolicited input on patient care (e.g., automatically generated lists of patients needing vaccination or overdue for cancer screening), and that any new intervention would face competition for PCPs’ limited time in an “attention economy” (Q12; Table ). To mitigate the time burden associated with proactive e-consults, multiple participants suggested that they be delivered shortly before patients’ upcoming appointments, so that PCPs can see and implement the recommendations within the context of a patient visit (Q13; Table ).

Theme 3: Leveraging care teams to facilitate recommended DKD care
Subtheme: Clinic-based pharmacist or nurse could implement recommendations
PCPs in the VA and safety net practice settings identified a role for delegating implementation of guideline-recommended care from proactive e-consults to other clinic staff, such as a clinic nurse or ambulatory pharmacist, who are often already engaged in chronic disease medication management and panel management activities. This delegation would likely be more efficient and would mitigate the added time burden on PCPs: I think it’s still fine for the recipient to be the PCP…it’s more about just the operations of what happens after the PCP gets the message, and I think when people feel like you’re just pitting more on them…It’s a recipe for burnout. And so I think it may be more of like a training piece, and bringing these various stakeholders together, so that like, the director of ambulatory pharmacy has signed off on like…if you get these e-consults, you can feel free to forward it to your clinic pharmacist, and then they can reach out to the patient. Or maybe nursing does it, and they have a script for education that they provide…Like we have a lot of team resources in primary care, but I would say they’re not optimally utilized, and just a lot just falls on the PCP. And so the more we can unburden the PCP and say, hey, you’re gonna get this message, but we’ve gotten buy-in from the clinical pharmacy team that you can send this to them, and they can run with it. Just something like that can really go a long way. (Q14; Table )

Meanwhile, PCPs at the academic health system tended to envision implementing e-consult recommendations personally, with minimal involvement from other clinic staff apart from scheduling an appointment to discuss DKD care. In addition, participants felt it was important for e-consult recommendations to be visible to all providers in the patient’s electronic health record, given the frequency of co-management and provider cross-coverage.

Subtheme: Desired level of involvement of PCP
PCPs varied in their preferences regarding their desired level of involvement in receiving and implementing proactive e-consults for DKD care. Participants acknowledged that most PCPs would want to be notified of potential medication changes, but very few would feel the need to expressly sign off on implementation of e-consult recommendations, particularly in the context of co-management by clinical pharmacists (Q15 & Q16; Table ).
Some participants even expressed ambivalence about the necessity of being notified as the PCP, citing the overwhelming volume of in-basket messages and the fact that any interim changes would eventually be reviewed by the PCP in subsequent visits: I don’t think I need to necessarily sign off on it. I do trust the pharmacists and the specialists that we work with…And then I think the FYI is mostly just so that I have an idea what’s happening. But I also understand that we get a lot of FYI’s in primary care…So, I could go either way on that. (Q17; Table ).
In our interviews with PCPs from three health systems, we found that PCPs were generally supportive of the concept of proactive nephrology e-consults for DKD management. PCPs identified common barriers to guideline-recommended DKD care delivery and noted the potential benefits of having proactive guidance for overcoming some of these barriers and ensuring optimal DKD care. Meanwhile, PCPs identified key challenges for implementation of proactive e-consults and outlined potential mitigation strategies for successful integration into the primary care setting. PCP support for proactive e-consults as a strategy to facilitate optimal DKD care delivery was grounded largely in their ability to guide the prescribing, counseling, and monitoring related to newer DKD medications, such as SGLT2 inhibitors, and in their ability to provide assurance that individual patients were getting appropriate kidney care. Furthermore, the recommendations offered in e-consults could serve an educational function and reinforce PCP knowledge and self-efficacy in caring for patients other than those who received e-consults. Despite these potential benefits, however, PCPs acknowledged barriers to guideline-recommended DKD care that would not be addressed by proactive e-consults, in particular social risk factors.
Such factors include low patient health literacy, difficulty attending clinic appointments regularly, and inability to afford medications. Even when PCPs are aware of optimal DKD management and practice guidelines, other more immediate issues may take priority when patients are struggling with housing instability or substance use. Thus, while proactive e-consults have the potential to address some major barriers to optimal DKD care, they would not remedy all the barriers that PCPs identified in our study. Comparing responses among PCPs practicing in three different health systems yielded some notable findings. Not surprisingly, the challenges to optimal care delivery were largely shared across health systems and have been documented previously. A notable contrast emerged when PCPs discussed how they envisioned proactive e-consults for DKD management being implemented in their practice. PCPs within the VA and safety net settings frequently referred to existing collaborations with clinic-based pharmacists and nurses as a logical resource for implementing e-consult recommendations. Since pharmacists and/or nurses were already deeply involved in medication management for patients with chronic diseases, leveraging their support for medication recommendations from proactive e-consults was felt to be a natural extension of the care they were already providing. Some PCPs even felt that e-consult recommendations could be sent directly to the nurse or pharmacist involved in a patient’s care, bypassing the PCP altogether. Meanwhile, PCPs in the academic health system tended to envision e-consults being followed up with an in-person visit in which the PCP could discuss the e-consult recommendations directly with patients. In that setting, considerations such as timing proactive e-consults close to upcoming patient appointments were felt to be crucial to ensuring the recommendations would be acted upon; otherwise, they risked becoming lost in the volume of messages that PCPs receive daily. Based on our results, several implications emerged for the design of a potential proactive e-consult intervention. First, it must be clear that the incorporation of proactive e-consults is intended to be supportive rather than critical, using non-judgmental language that can be perceived positively by clinicians and by patients who have access to their electronic health record. PCPs should be informed about and oriented to the purpose of proactive e-consults, and the e-consult recommendations should avoid the implication that current care is substandard. The language of e-consult recommendations should be framed as a system-level improvement effort to optimize care delivery population-wide, rather than as a critique of individual cases. Second, the implementation of a proactive e-consult program needs to be tailored to clinic workflows and the resources available within health systems. Leveraging care teams such as clinic pharmacists or nurses could substantially improve the effectiveness of a proactive e-consult intervention compared with relying solely on PCPs to enact specialist recommendations. In addition, the recipient of proactive e-consults can be flexible and does not necessarily need to be the PCP. In determining these details of proactive e-consult design, it is crucial to involve and establish buy-in from all key clinic stakeholders and to collaborate on a clear, agreed-upon protocol for handling e-consult recommendations.
Strengths of our study included the exploration of PCP perspectives across diverse health systems, providing insights into potential strategies for implementing proactive e-consults in practice settings with different workflows and resources. Limitations included the hypothetical nature of the interview questions, as responses may have differed among PCPs with actual experience of proactive e-consult programs. Geographically, all three health systems were based in one city, limiting generalizability. Participants were predominantly female: while this may reflect the demographic characteristics of local PCPs, it may also affect generalizability. We purposely allowed clinic directors to determine the most appropriate recruitment scheme, which varied across clinics, and as such we were unable to systematically collect characteristics of PCPs who were invited but did not participate. We focused the scope of our interviews on PCP perspectives across different health systems, but examining the perspectives of nephrologist and patient stakeholders on the acceptability of proactive e-consults will be a critical future direction. Additionally, we did not explore the potential financial considerations associated with the implementation of proactive e-consults. Traditional e-consults are sometimes a reimbursed clinical activity for which patients have a co-pay and which requires pre-authorization. While it is not clear how proactive e-consults would be funded across health systems, a traditional fee-for-service reimbursement mechanism could be inherently at odds with the nature of a proactive e-consult strategy. In summary, we found that PCPs saw potential benefits of proactive e-consults for DKD management, noting particular value in their ability to promote optimal treatment based on the most recent evidence and guidelines, to reassure PCPs of the diagnosis and treatment plan, and to serve an educational role helping PCPs stay up to date. PCPs also identified mitigation strategies for potential challenges in implementing proactive e-consults: recognizing the variability in workflows and resources between health systems, leveraging clinic support staff to enact e-consult recommendations, and framing e-consults as a system improvement effort rather than as a critique of individual PCPs emerged as key considerations for successful implementation.
Academic general practice/family medicine in times of COVID-19 – Perspective of WONCA Europe

The SARS-CoV-2 infection has hit almost all countries of the world in an unprecedented manner. In just a few days, the functioning of individuals and entire societies changed drastically, and their health and lives were seriously threatened. Health care systems are now the focus of attention all over the world. Many eyes are directed at family doctors, who stand on the front line of the fight against the virus. At the beginning of the outbreak, many of them made the ultimate sacrifice, losing their own lives in the fight against the global pandemic. Several articles have been published on various clinical aspects of COVID-19. In this background article, we want to discuss the risks and challenges that the current pandemic presents to the academic aspects of General Practice/Family Medicine in Europe. Specifically, we focus on challenges in the fields of education, research, and quality assurance – the three main academic pillars of our discipline. Moreover, we present an overview of the efforts undertaken by the European Region of the World Organisation of National Colleges, Academies, and Academic Associations of General Practitioners/Family Physicians (WONCA Europe) to support Primary Health Care doctors in their fight against the pandemic.
The COVID-19 pandemic has also significantly affected teaching at all levels of medical education. Most medical schools have suspended regular classes for students, moving them from classrooms to the internet. Overnight, virtual learning, video conferencing, social media contacts, and broadly understood telemedicine have become substitutes for traditional medical education. For fear of spreading the virus, rotations on wards and in outpatient clinics were also stopped, limiting the possibility for students and young doctors to gain knowledge and broaden competencies through direct contact with real patients. Such decisions are primarily dictated by the fear that frequent rotations of students and residents will cause them to become potential infection vectors. In many places, however, both students and residents joined the fight against the pandemic, acquiring new competencies that may prove extremely useful in the future. Suspending clinical classes or replacing them with virtual teaching raises concerns about the quality of education and the competencies of graduates. On the other hand, significant delays in the didactic process, especially for students and residents in the final years of training, may delay subsequent cohorts of doctors in entering the health care system, which in many European countries may significantly aggravate already large shortages of medical staff. The introduction of blended medical education, which has been advocated for many years, has accelerated rapidly during the pandemic, with a visible shift of focus to forms of distance learning. The necessity of virtual education poses new challenges not only for students but also, and perhaps above all, for teachers. Some of them, especially those belonging to the older generation and teaching clinical competencies in the GP practice, have significant difficulties in conducting distance learning. An important challenge is also the need to reconcile the organisation of reliable and credible exams with the need to maintain social distance. The immediate stoppage of traditional medical education in many places was accompanied by the sudden appearance of new solutions in this area. Their nature, scope, and consequences are currently unknown and are the subject of ongoing research.
During the initial six months of the COVID-19 pandemic, approximately 18,000 publications related to the topic were listed on PubMed. A quick keyword-based search revealed only around 170 publications associated with COVID-19 and general practice. Most are position papers, recommendations, or non-systematic reviews. The main topics covered by the currently published literature are: (i) telemedicine and remote care, (ii) monitoring and self-assessment of possible symptoms, (iii) medical education, (iv) training to deal with COVID-19 in the practice, (v) the current situation in general practice, (vi) how general practice is expected to change during and after the COVID-19 pandemic, and (vii) the burden COVID-19 brings to health care providers, although general practitioners were not well represented in most of the surveys assessing this. Papers describing the characteristics of COVID-19 cases from outpatient settings are also scarce. Due to the dynamic situation during the last months and the relatively short time since the onset of the pandemic, there is a lack of original data, especially from general practice settings. This contrasts with the vital role of general practitioners, and the resources they need, at the frontline of contact with possibly COVID-19 positive persons. New public health challenges also present a different area of research, for example the tracing and testing of persons with COVID-19 contact, assessing the effects of lockdown measures and their loosening, but also researching and potentially amending their adverse effects, for example on mental health, violence in households/families, or economic safety and prosperity. Specific research questions arise from the current state of knowledge, and the current lack of it, and need the contribution of GPs to be answered:
What is the burden of COVID-19 for general practitioners and their teams?
What are the strategies to protect general practitioners and their teams effectively and efficiently?
What could future strategies for COVID-19 testing be?
What were and are the direct and indirect effects of COVID-19 on morbidity and mortality?
What approaches could GPs take to deal with unmet health needs?
What was the effectiveness of the measures taken to control COVID-19?
What could be a possible long-term strategy for dealing with COVID-19 in populations?
How should possible COVID-19 infections be handled during phases of high upper respiratory tract infection (URTI) incidence due to a broader spectrum of viruses?
Since it is only possible to work on some aspects retrospectively, it is now time to conceptualise and plan the investigations. The knowledge gained may be helpful in future waves of the COVID-19 pandemic or even in other epidemics.
Maintaining the quality of care and the safe management of all patients in primary care faced several challenges during the COVID-19 pandemic. The approach taken by guidelines for family medicine evolved through several steps. At first, face-to-face examination of patients by family doctors was advised against: family doctors used phone interviews to assess their patients and advised them to stay at home. In severe cases, patients were referred to hospital. Some weeks later, the second step recommended that family doctors examine patients, although no testing was available. At this stage, family doctors learned about the main features of COVID-19 through direct observation. In the third stage, blood tests and X-rays were used to assess the severity of patients’ conditions, to better decide which patients needed referral to hospital. Most recently, point-of-care testing with the polymerase chain reaction (PCR) test has been used. These rapidly changing tasks assigned to GPs at different stages of the fight against the pandemic challenge the quality of their care. As a result of the need to halt the spread of the disease and protect healthcare workers, most primary care patients were consulted remotely. The COVID-19 pandemic transformed healthcare systems worldwide, with telemedicine, or virtual healthcare, being one of the key revolutions. Providers with an existing telehealth infrastructure had experienced slow implementation of new technologies in the past; during the recent pandemic, rapid adoption of telemedicine by both patients and providers occurred. Obstacles such as reimbursement issues, the limited comfort of patients and doctors with telemedicine technologies, and a perceived need for telemedicine only in remote rural areas have been turned around in the face of the need for social distancing and quarantine as a way to stop the pandemic. Options for telemedicine are diverse and may include written, audio, or video communication between the patient and the family physician. Written communication may consist of sharing documents or even photos in a secure transfer system. Traditional telephone consultations were used by family doctors long before the COVID-19 pandemic, but modern ‘remote’ medicine is conducted through a secure video meeting, along with the use of advanced remote technology for physical examination and vital signs monitoring. Digital remote examination tools, such as the digital stethoscope and otoscope, have been incorporated into clinical practice and may be complementary to video telemedicine solutions. The use of telehealth as ‘forward triage’ (the sorting of patients before they arrive at the practice) allows patients to be efficiently screened for COVID-19, but the opportunity for physicians and patients to communicate 24 h a day, using smartphones or webcam-enabled computers, challenges the classic role of the close long-term doctor–patient relationship, one of the pillars of family medicine. COVID-19 accelerating the adoption of telehealth may improve health care access in remote areas, but it may also reduce the motivation to invest significant resources in rural infrastructure. Previous studies suggest that phone consultations can be used in some situations (e.g. follow-up) but have many safety issues (e.g. difficult communication, absence of physical examination, lack of a comprehensive approach); e-consultations and video consultations may improve some aspects of care delivery, but improvements are needed to manage the safety of such consultations.
Remote consultations resulted in remote prescribing, where the danger of both overprescribing and deprescribing emerged. In ordinary times, deprescribing should be considered when the potential for harm outweighs the benefit of a medicine. However, there is currently not enough evidence to support wholesale changes to patients’ medication, since such action requires careful assessment, follow-up, and safety-netting. Overprescribing, for example of antibiotics, analgesics, and benzodiazepines, might occur due to new restrictions on the number of face-to-face consultations performed. The ongoing pandemic also threatened equity in health care. Certain vulnerable populations (e.g. people with disabilities, or people in insecure housing conditions) may be impacted more significantly by COVID-19. This can be mitigated if simple actions and protective measures are taken by key stakeholders. Public health and social measures must be tailored to local structures, conditions, and epidemiology. Another potential threat concerns patients who are underserved for problems not related to COVID-19. While most attention was given to potential COVID-19 patients, other patients requiring care were at risk of being left behind, seriously calling into question the equity and safety of care. Therefore, the maintenance of regular services for the entire population of patients cared for by GPs should be one of the key priorities.
During the pandemic, WONCA Europe engaged in several different areas: the spread of information, education, research, and cooperation/visibility. WONCA Europe wanted to maintain the flow of information to its members, which was ensured by establishing a COVID-19 resource page at the beginning of March 2020. The information is targeted for use by general practitioners/family physicians in Europe. WONCA Europe also recommended prioritising guidance from each country’s specific local health authorities. A presidential letter on the pandemic was sent to all member organisations. Additionally, information was spread through a newly established newsletter, with current topics sent to all member organisations. World Patient Safety Day was celebrated by WONCA Europe and the Association for Quality and Safety in Family Medicine (EQuiP) by conveying the message of this year’s topic (workers’ health) and a short video on patient safety. Several other activities were also delivered to spread the news, such as an interview with the president, a video message by the president on COVID-19, and professional articles. WONCA Europe also aimed to offer education on various topics associated with the pandemic. Several webinars were held on primary care during COVID-19; most were organised jointly between WONCA Europe and other organisations or networks, such as the WHO, the European Forum for Primary Care (EFPC), and others. As there is a lack of evidence on primary care during the pandemic, WONCA Europe and its networks also engaged in research. EURACT investigated education experiences during the pandemic. Currently, a large study on quality and safety during the pandemic is underway, as a cooperation between Ghent University and EQuiP. WONCA Europe put effort into cooperation with key stakeholders and into its visibility beyond its member organisations. It signed and realised statements on quality and safety (with EQuiP), on the COVID-19 pandemic (with the EFPC and with Vasco da Gama), and on digital health and telemedicine for primary care (with the WHO Regional Committee for Europe). A meeting was held with WHO Europe on the topic of family physicians and the pandemic: a way forward, with the goal of drawing common WHO/WONCA Europe conclusions and suggestions for the future. Meetings were also held with the European Medicines Agency (EMA), the European Union of General Practitioners (UEMO), and the European Cancer Organisation (ECCO) to address future challenges in different areas. WONCA Europe also started a process to identify the core values of family medicine. A special working group is assessing available documents, discussing the possible effects that changing communities and a changing world are having on the core values of the discipline, and developing a list of core values. More information about WONCA Europe initiatives related to the academic aspects of General Practice/Family Medicine during the ongoing pandemic is available on the WONCA Europe website.
Future challenges for the academic community are associated with the three main pillars of the discipline of family medicine: education, research, and the quality and safety of clinical work. Education in family medicine in Europe is mostly delivered according to the six core competencies of family medicine and incorporated into the EURACT Educational Agenda for family medicine/general practice. The pandemic is significantly changing the content and methods of teaching. Therefore, the content of teaching should be adapted to the needs of family physicians, and new methods of teaching (such as online lectures and online examinations) should be evaluated and validated for their use in teaching. Teachers must be trained in these changes so that they gain the new competencies needed to cope with them adequately. All these measures will also assure the quality of teaching in the future. Currently, there is a lack of evidence from primary care regarding not only the management of COVID-19 patients but also the management of all other patients. Therefore, studies are needed to generate evidence that will enable the writing of evidence-based guidelines, ultimately leading to quality and safety in family medicine practice. A challenge here is obtaining the funding required for research; the academic community should connect with important stakeholders to acquire sufficient resources for performing high-quality research involving all European countries. In the clinical area, the challenges are how to ensure high-quality care for all patients given the rise of remote consultations, how to assure the safety of health workers and patients, and how to cope with the shortage of healthcare workers that was evident even before the pandemic. New models of work will emerge, and it will be the role of the academic community to assess and prioritise the usefulness of these new organisational models. Professionals from different fields of health care will need to cooperate to ensure the most efficient interdisciplinary approach. Health workers’ safety and well-being are another challenge. Effective models must be developed for coping with anxiety, stress, and fear for one’s own safety and health and for the lives of relatives and patients. Physician burnout is well documented in the literature; it is marked by a feeling of lack of agency, detachment, and disillusionment, and it can lead to irritability and poor decisions and have negative impacts on interpersonal relationships. Untreated and unmanaged, these conditions can lead to mental health issues for the doctor, including depression and substance misuse. Psychological threats might be especially dangerous for residents and other junior physicians. Family doctors early in their career need to be especially careful, as they are often given a heavier workload than their seniors and are still developing their own coping strategies. Care ethics specify that, to avoid burnout, we need to foster an environment where there is time to stop and reflect, to understand our own boundaries and limits.
The pandemic has challenged medical education, research, and quality assurance in General Practice/Family Medicine. These areas have to adapt to the new needs and requirements that have emerged in times of COVID-19. The pandemic can serve as an opportunity to scale up primary care, mainly in the areas of interprofessional care, remote consultations, and empowering people for self-care, thus achieving a higher quality of care, reducing the workload on physicians, and reducing health care expenses. It has also highlighted the importance of physician well-being, and we must continue the discussion on ‘safe staffing, safe resources and safe models of care’ long after the pandemic. Family doctors across Europe are facing new clinical challenges and have to adapt their practices to the unique circumstances created by COVID-19. Academic general practice has to follow and support these changes. WONCA Europe and all its network organisations are ready to join and lead this process.
|
Causal inference in multi-state models – sickness absence and work for 1145 participants after work rehabilitation

Data on sickness benefits are a valuable source for analysing sick leave, disability, and employment, but due to the complexity of such data the choice of measurement type and analysis can be challenging. However, recent work using data from Norwegian and Danish registries has shown that multi-state models can be a very successful framework for analysing this kind of data. For example, when studying the effect of participating in work rehabilitation programs, events such as return to work, onset of sick leave benefits, or work assessment allowance can hardly be seen as single time-to-event outcomes, but rather as a set of events which define states that the individuals move between. Multi-state modelling, as an extension of traditional survival analysis, offers a unified approach to modelling the transitions between such states. National registries with data on sickness benefits are a good basis for many types of analyses. The data are typically complete, and detailed information is collected on the type of benefits and the dates when they are given. Additional information on the individuals receiving benefits is often available, or can be obtained in even greater detail by coupling such registry data with cohort data where detailed information is available. The assessment of possible interventions with the purpose of reducing sickness absence is an important aim when analysing sickness benefit data, and identifying successful interventions could have a potentially large economic impact. In this paper we focus on two such interventions, which have both received a lot of attention. One is the effect of partial compared to full-time sick leave benefits, see e.g. , and the other is the effect of a cooperation agreement on a more inclusive working life, see e.g. . In the Nordic countries there have been political initiatives for expanded use of partial sick leave. Part-time work may be beneficial, and a feasible way to integrate individuals with reduced work ability in working life, if the alternative is complete absence from work. In Norway, an agreement on a more inclusive working life was signed by the Government and the social partners in the employers’ and employees’ organisations in 2001, and was renewed in 2005, 2010 and 2014. One of the main aims of this tripartite agreement has been to reduce the number of individuals on sick leave and disability pension. Even though some attempts have been made to conduct randomized trials to assess interventions for reducing sick leave, the execution of such experiments is challenging and not very common. As for using observational data to identify the effect of such interventions, numerous attempts have been made, see e.g. . There has also been massive methodological development over the last decades within the field of causal inference, providing a formal framework for identifying, from observational data, parameters similar to those in randomized trials. Such methods can also be employed in a multi-state model setting, but this has hardly been done yet. Earlier work on multi-state models for Norwegian registry data on sick leave benefits has also been in the form of cohort follow-up studies, but without using the detailed covariate information available in these cohorts. In this paper we extend the analysis of Øyeflaten et al.,
In this paper we extend the analysis of Øyeflaten et al. , analysing transitions between sick leave benefits, work assessment allowance, disability pension and work for patients participating in work rehabilitation programs. Specifically, we make three extensions to the analyses in the original paper. First, we cover a larger multi-center cohort, roughly double in size. Second, we utilize the detailed covariate information available for this cohort to estimate covariate specific state transition probabilities; both proportional hazards and additive hazards models are considered for estimating the transition intensities. Third, we explore three different approaches based on classical methods from the causal inference literature to estimate the effect of interventions in multi-state models. The purpose of this paper is therefore twofold: to use multi-state models to study sickness absence and work based on detailed covariate information for a cohort of participants after work rehabilitation, and to illustrate how methods from the causal inference literature can be used to estimate the effect of interventions in such a multi-state model framework. Detailed covariate information is, of course, central in making covariate specific predictions in a multi-state model, but even more important when estimating the causal effects of interventions from observational data. The statistical and causal assumptions needed will be discussed specifically. Covariate information has been used in multi-state models before for predicting sick leave and related outcomes, in two recent papers on Danish data . The main difference between the data in these studies and the data in the present study is that the Danish data cover a much larger cohort, while the Norwegian data include more detailed information on the health of the participants. The latter is important for precise patient predictions and for adjusting for confounding when aiming at drawing causal conclusions. None of the earlier studies consider the estimation of causal effects of interventions in a multi-state setting. With the increasing attention on multi-state modelling of event-history data, more and more software packages have been made available, especially in R ; for example the mstate , msm and msSurv packages. See the latter, or the books of Beyersmann et al. and Willekens , for detailed overviews of available R packages. The computations in this paper have been performed in R using the survival and mstate packages and by standalone code written by the first author.

Data sources

A multi-center cohort

The patients being analyzed are part of a multi-center cohort study with the purpose of studying how health complaints, functional ability and fear-avoidance beliefs explain future disability and return to work for patients participating in work rehabilitation programs. Data have been collected on 1155 participants from eight different clinics offering comprehensive inpatient work rehabilitation. Mean time on sickness benefits during the last two years before admittance to the work rehabilitation program was 10 months (SD = 6.7). All participants gave informed consent, allowing for follow-up data on sickness absence benefits to be obtained from national registries, and answered comprehensive questionnaires during their stay at the clinic. The study was approved by the Medical Ethics Committee; Region West in Norway (REK-vest ID 3.2007.178) and the Norwegian social science data services (NSD, ID 16139).
The data collected through questionnaires include various background information together with detailed health variables such as subjective health complaints, physical function, coping and interaction abilities, and fear-avoidance beliefs. See Øyeflaten et al. for more details on the cohort.

Data on sickness benefits

All Norwegian employees are entitled to sickness benefits such as sick leave benefits, work assessment allowance or disability benefits, if incapable of working due to disease or injury. The employer pays for the first 16 days of a sick leave period, and thereafter the Norwegian Labour and Welfare Administration (NAV) covers the disbursement. Data on these benefits, both the ones covered by the employer and by NAV, were obtained from NAV's register, which contains information on the start and stop dates of sickness benefits given from 1992 onwards for the entire Norwegian population.

Data for current analysis

Out of the original 1155 participants in the multi-center cohort study, we excluded 4 individuals with an unknown date of departure from their rehabilitation center, 1 individual who had not answered the relevant questions on subjective health complaints and 5 individuals already on disability pension at baseline, and were left with a study sample of 1145 participants. Baseline was set to the time of departure, which varied between May 16th 2007 and March 25th 2009. Individuals were followed up with regard to their received sickness benefits until July 1st 2012, which was the date of data extraction from NAV.

A multi-state model for sickness absence and work

The occurrence of an event in survival analysis can be seen as a transition from one state to another, for example from an alive state to a death state. The hazard rate corresponds to the transition intensity between these two states. Multi-state models form a flexible framework allowing the standard survival model to be extended with more than one transition and more than two states. A detailed introduction to multi-state models can be found in review papers such as Hougaard , Commenges , Andersen and Keiding , Putter et al. and Meira-Machado et al. , or the book chapter by Andersen and Pohar Perme . Sickness absence and disability data are a good example of data suitable for being modelled within the multi-state framework. Changing between work and being on various types of sickness benefits over time can naturally be perceived as moving between a given set of states. In Norway, employees on partial or full sick leave can be fully compensated through sick leave benefits for up to a year, after which they can be entitled to work assessment allowance. If their underlying health condition provides reasons for it, they may be granted a disability pension or further partial sick leave benefits. The latter is actively recommended by the authorities . Partial sick leave can be graded from 20 to 99 %. Based on these policies we define five states that the participants can move between after being discharged from the rehabilitation centers: work (no received benefits), sick leave, partial sick leave, work assessment allowance and disability pension, and we propose the multi-state model illustrated in Fig. . At baseline, when being discharged from the rehabilitation center, individuals can start in any of the first four states.
Individuals are defined as being on sick leave when receiving full sick leave benefits, on partial sick leave when receiving sick leave benefits graded below 100 %, and on disability pension when receiving disability pension on unlimited terms. Work assessment allowance is an intermediate benefit typically given between sick leave and disability pension. It is granted for individuals going through medical treatment or rehabilitation, or for others who might benefit from vocational rehabilitation actions. There is an upper limit of four years for receiving work assessment allowance. When individuals do not receive any sickness benefits, they are by definition in work. The only exception is when there are gaps with no benefits before receiving disability pension – as there are no real transitions directly from work to disability, such gaps are attributed to the most recently received benefit. To avoid including non-genuine transitions, benefits with a duration of only one day have been discarded. When benefits registered overlapped in time, the most recently registered benefit was used. As for initial states: 178 patients started in the work state (receiving no benefits) after being discharged from the rehabilitation center, 106 were on partial sick leave benefits, 496 on full sick leave benefits and 365 on work assessment allowance. Disability pension was defined to be an absorbing state in the multi-state model, as few transitions were observed to go out of this state in the original data. The total number of subsequent transitions between the five states within the study window is shown in Table . Covariate information includes age at baseline, gender, marital status, whether a cooperation agreement on a more inclusive working life is present, educational level, type of work, income, working ability score when entering rehabilitation and diagnosis group at baseline. All covariates are based on information from the questionnaires, except information on type of diagnosis, which is retrieved through the ICPC code when available in NAV's register, and partly from the cohort data at the time of entering the rehabilitation. The current diagnosis at any given time is defined as the last given diagnosis. Note that these selected covariates are only one of many possible representations of the information in the original data source, constructed to sufficiently describe the differences between patients. Detailed statistics on the covariates are found in Table . The transition intensities for the 15 transitions in the multi-state model from Fig. were examined using the Nelson-Aalen estimator for marginal transition intensities, and Cox proportional hazards and Aalen additive hazards models for conditional transition intensities using relevant covariate information. Cox and Aalen models were fitted using either the coxph or aareg function in the survival package of the statistical software R . The Nelson-Aalen estimator was calculated by using the coxph function without covariates.
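To make this concrete, the following is a minimal sketch in R of how the transition structure and transition-specific Cox models could be set up with the survival and mstate packages. The data frame ms_long is a small simulated stand-in for the real long-format analysis file (one row per individual and transition at risk, in counting-process form); its variable names and the single covariate age are illustrative assumptions, not the authors' actual code.

library(survival)
library(mstate)

## Transition structure of the five-state model above: 15 allowed
## transitions, disability pension absorbing. State order: 1 = work,
## 2 = sick leave, 3 = partial sick leave, 4 = work assessment
## allowance (waa), 5 = disability pension.
tmat <- transMat(x = list(c(2, 3, 4),     # work -> sick, partial, waa
                          c(1, 3, 4, 5),  # sick -> work, partial, waa, disability
                          c(1, 2, 4, 5),  # partial -> work, sick, waa, disability
                          c(1, 2, 3, 5),  # waa -> work, sick, partial, disability
                          c()),           # disability pension: absorbing
                 names = c("work", "sick", "partial", "waa", "disability"))

## Simulated stand-in for the long-format data: one toy individual per
## allowed transition, expanded to one row per competing transition at
## risk (counting-process form: Tstart, Tstop, status).
set.seed(1)
arrows <- data.frame(from  = c(1,1,1, 2,2,2,2, 3,3,3,3, 4,4,4,4),
                     to    = c(2,3,4, 1,3,4,5, 1,2,4,5, 1,2,3,5),
                     trans = 1:15)
ms_long <- do.call(rbind, lapply(1:15, function(i) {
  at_risk <- arrows[arrows$from == arrows$from[i], ]  # competing transitions
  data.frame(id = i, Tstart = 0, Tstop = round(runif(1, 30, 300)),
             from = at_risk$from, to = at_risk$to, trans = at_risk$trans,
             status = as.integer(at_risk$trans == i),  # 1 = transition made
             age = sample(25:60, 1))
}))

## Stratified Cox model: a separate baseline hazard for each of the 15
## transitions, here with a common age effect (transition-specific
## covariate effects can be obtained via interactions with trans).
fit <- coxph(Surv(Tstart, Tstop, status) ~ age + strata(trans),
             data = ms_long, method = "breslow")

In this clock-forward (Markov) formulation, time since discharge is the basic time scale, which is why the counting-process representation Surv(Tstart, Tstop, status) is the natural data format.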
Say that X(t) denotes the state of an individual at time t. The transition probability matrix $\mathbf{P}(s,t)$, with elements $P_{hj}(s,t) = P(X(t) = j \mid X(s) = h)$ denoting the transition probability from state h to state j in the time interval (s,t], was then estimated by the matrix product-integral formula

(1) $$\hat{\mathbf{P}}(s,t) = \prod_{u \in (s,t]} \left( \mathbf{I} + d\hat{\mathbf{A}}(u) \right),$$

where $\hat{\mathbf{A}}(u)$ is the corresponding estimated cumulative transition intensity matrix at time u. The cumulative intensities in $\hat{\mathbf{A}}(u)$ are estimated using the Nelson-Aalen estimator. The cumulative transition intensity matrix can also be estimated conditioning on covariates Z, changing the formula in Eq. (1) to

(2) $$\hat{\mathbf{P}}_{Z}(s,t) = \prod_{u \in (s,t]} \left( \mathbf{I} + d\hat{\mathbf{A}}_{Z}(u) \right),$$

where $\hat{\mathbf{P}}_{Z}(s,t)$ and $\hat{\mathbf{A}}_{Z}(u)$ are the estimated covariate specific transition probability matrix and cumulative transition intensity matrix, respectively. The cumulative intensities in $\hat{\mathbf{A}}_{Z}(u)$ are estimated for given values of Z using Cox proportional hazards models or Aalen additive hazards models. From the estimated transition probability matrix one can study the probability of being in state j at time t when starting in state h at baseline, $\hat{P}_{hj}(0,t)$, or the overall probability of being in state j at time t,

(3) $$\hat{P}(X(t) = j) = \sum_{k} \hat{P}_{kj}(0,t) \cdot \hat{P}(X(0) = k).$$

For models without covariates, P(X(0) = k) can be estimated by the proportion starting in state k. With covariates, it can be estimated using logistic regression. With cumulative hazard estimates from the Nelson-Aalen estimator, the formula in Eq. (1) corresponds to the Aalen-Johansen estimator. With this marginal approach, or with covariate adjusted cumulative hazards as in Eq. (2) estimated using Cox proportional hazards models, estimates and confidence intervals were calculated using the mstate package . Using cumulative hazard estimates from Aalen additive hazards models, the estimator in Eq. (1) has to be implemented separately. Confidence intervals can then be calculated using bootstrap methods or analytically as described in Aalen, Borgan and Gjessing (, p. 183).
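Continuing the simulated sketch above, the product-integral of Eq. (1) and the weighting in Eq. (3) could be evaluated with mstate's msfit and probtrans functions. The initial-state counts in p0 are taken from the study; the fitted objects are still the illustrative toy versions.

## Without covariates the stratified Cox fit reduces to Nelson-Aalen
## estimates of the 15 cumulative transition hazards.
fit0 <- coxph(Surv(Tstart, Tstop, status) ~ strata(trans), data = ms_long)

nd  <- data.frame(trans = 1:15, strata = 1:15)  # one row per transition
haz <- msfit(fit0, newdata = nd, trans = tmat)  # cumulative hazards A(u)

## probtrans() evaluates the product-integral of Eq. (1); with
## Nelson-Aalen input this is the Aalen-Johansen estimator of P_hj(0, t).
pt <- probtrans(haz, predt = 0)
head(pt[[2]])  # transition probabilities starting from state 2 (sick leave)

## Overall state occupation probabilities as in Eq. (3): weight the
## predictions from each starting state by the observed initial-state
## distribution (178 work, 496 sick, 106 partial, 365 waa, 0 disability).
p0  <- c(178, 496, 106, 365, 0) / 1145
occ <- Reduce(`+`, lapply(1:4, function(h) p0[h] * as.matrix(pt[[h]][, 2:6])))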
Note that there is an intrinsic Markov assumption in this way of multi-state modelling, which can be challenging when using complex data such as data based on sick leave and disability benefits. When the length of stay in a state affects the intensity for leaving the state, this assumption is in principle violated. This is the case for three of the states in our multi-state model due to administrative regulations: individuals can only be on sick leave or partial sick leave spells of maximum one year, and on work assessment allowance for a maximum of four years. To what degree such violations pose a problem will, however, depend on how often individuals stay in these states long enough for the regulations to take effect, which again partly depends on the follow-up time of the study. In our study we have individual follow-up times ranging between three and five years, which means that the maximum time of four years for work assessment allowance will not pose a problem. In fact, the mean length of stay in this state is 274 days (with a 95 % percentile of 1028 days). Also, sick leave and partial sick leave spells close to a year are very rare in our study population, with a mean stay of 38 days on sick leave and 68 days on partial sick leave (and corresponding 95 % percentiles of 180 and 218 days). Overall, this indicates that while serious violations of the Markov assumption are possible, they are in practice uncommon and should not have any big impact on the results of our study. However, in general one should be aware that violations of this assumption may affect some of the estimated effects, including the causal parameters of interest. Note also that more advanced models relaxing the Markov assumption have been developed, but the impact of such violations will vary and can often be disregarded. See for example Gunnes et al. and Allignol et al. , who only show small discrepancies between Markov and non-Markov models in situations where the Markov assumption is not met. When focusing on overall state occupation probabilities as in Eq. (3), Datta and Satten have shown that the product-integral estimator in Eq. (1) is consistent regardless of whether the Markov assumption is valid.

Causal inference and the effect of interventions in multi-state models

Besides estimating transition intensities and probabilities for a given set of states in a multi-state model and making individual predictions, it is also of interest to evaluate population average effects of interventions in the multi-state model framework. There is a fundamental difference between merely predicting covariate specific outcomes and estimating the causal effect of interventions on them, which creates a need for special methods and assumptions. We now consider three different approaches based on classical methods from the causal inference literature. The methods are exemplified with regard to the two types of possible interventions mentioned in the Introduction. The first intervention is the use of partial versus full-time sick leave, where partial sick leave is often thought to cause shorter absence and higher subsequent employment . The other intervention is the use of cooperation agreements on a more inclusive working life, which in Norway have been implemented with the goals of improving the work environment, enhancing presence at work, preventing and reducing sick leave, and preventing exclusion and withdrawal from working life. A secondary aim is to prevent withdrawal and to increase employment of people with impaired functional ability. Participating enterprises must systematically carry out health and safety measures, with inclusive working life as an integral part, and will in return receive prevention and facilitation subsidies and have their own contact person at NAV . Note that the first of these two interventions is represented through states in our multi-state model in Fig. , while the latter is represented as an additional covariate as shown in Table . As for causal assumptions, we will focus on the three general conditions which have been identified for estimating average causal effects: positivity, exchangeability ("no unmeasured confounding") and consistency ("well-defined interventions") . We will also discuss how the related modularity condition, e.g. from the Pearl framework of causal inference , is relevant in our context of multi-state models. Additionally, as always, we need the statistical assumption of no model misspecification, which in our case is important both at the intensity level and at the overall multi-state level.
The importance and validity of all these assumptions are discussed separately for the three different approaches in the following subsections.

Artificially manipulating transition intensities

One proposed method for making causal inference in multi-state models is to artificially change certain transition intensities in $\hat{\mathbf{A}}(u)$ and then explore the corresponding hypothetical transition probabilities . Such changes in transition intensities, creating a new transition intensity matrix which can be denoted $\tilde{\mathbf{A}}(u)$, may represent interventions. The hypothetical transition probabilities, which we can denote $\tilde{\mathbf{P}}(s,t)$, may then represent counterfactual outcomes. Confidence intervals for such hypothetical transition probabilities can be found through the distribution of the cumulative intensities after manipulation. For situations without covariates and for the additive hazards model this follows from the arguments in Aalen, Borgan and Gjessing (, p. 123–126 and 181–190). For the Cox model it follows from the functional delta method in Andersen, Borgan, Gill & Keiding (, p. 512–515). For more on these types of analyses with respect to causal inference, and especially the connection to G-computation, see Keiding et al. and Aalen, Borgan and Gjessing (, p. 382). The important causal assumption for this approach to be reasonable is that when intervening on a set of transition intensities, the remaining transition intensities stay unchanged. This is equivalent to the modularity assumption and the definition of a structural causal model in the Pearl framework of causal inference . See Aalen et al. and Røysland for more on modularity in the light of intensity processes. However, even when it is unreasonable that such an assumption is fully met, it has been argued that this kind of inference in multi-state models can still give valuable insights (, p. 250). In this paper we will follow the ideas from Keiding et al. for our multi-state model for sickness absence and work in Fig. , and define interventions through manipulating transition rates within given sets of covariate values, where such interventions would be realistic. One example of an intervention would be to increase the use of partial sick leave compared to full sick leave, which would correspond to modifying the intensities into the partial sick leave and sick leave states. For the modularity assumption to be met in this case, the additional individuals counterfactually put on partial sick leave instead of full sick leave should behave identically to the individuals who were observed on partial sick leave in the original data. As those on partial sick leave generally are in a better health state than those on full sick leave, this is not a reasonable assumption. However, it is reasonable within similar strata of covariate levels, which we will study later in this paper. Satisfying the condition of modularity in this manner will also imply that the assumptions of positivity, exchangeability and consistency are met.
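The manipulation itself is simple to sketch: scale the estimated cumulative hazards of the affected transitions and re-evaluate the product-integral. The code below continues the simulated illustration above; the choice of transitions and the scaling factors 2 and 0.5 are hypothetical.

## Hypothetical intervention: double the intensities into partial sick
## leave and halve those into full sick leave; transition numbers refer
## to tmat above.
into_partial <- c(2, 5, 14)  # work->partial, sick->partial, waa->partial
into_sick    <- c(1, 9, 13)  # work->sick, partial->sick, waa->sick

haz_tilde <- haz
haz_tilde$Haz$Haz[haz_tilde$Haz$trans %in% into_partial] <-
  2.0 * haz$Haz$Haz[haz$Haz$trans %in% into_partial]
haz_tilde$Haz$Haz[haz_tilde$Haz$trans %in% into_sick] <-
  0.5 * haz$Haz$Haz[haz$Haz$trans %in% into_sick]

## Hypothetical transition probabilities under the manipulated
## intensities; the stored variances no longer apply, hence variance = FALSE.
pt_tilde <- probtrans(haz_tilde, predt = 0, variance = FALSE)

For the stratified version discussed above, the same manipulation would be carried out within each stratum of covariate values.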
Inverse probability weighting

Another approach from the causal inference literature is inverse probability of treatment (or propensity score) weighting . The treatment or exposure of interest can be represented either as states in the multi-state model or through additional covariates. One could for example weight by the inverse probability of being in a given state at baseline, before estimating the transition intensities of the model in Fig. . This would correspond to modelling a counterfactual scenario where there is a copy of each individual in every possible initial state. The sufficient conditions for this approach to be valid are again the causal assumptions of positivity, exchangeability and consistency. Positivity here means that there should be a non-zero probability of receiving all possible exposures for all covariate values in the population. Also, the model for the exposure, which is the foundation for the weights, must be well specified. See for example for a further discussion of these assumptions. Say that we would like to compare the effect of being put on sick leave versus partial sick leave at baseline (when being discharged from the rehabilitation center). Let us for now only consider those starting in either of these two states. Whether an individual is put on full or partial sick leave at baseline is hardly randomized. We could, however, model the counterfactual situation where everyone, regardless of their covariate information, was put on full sick leave at baseline and an identical copy of each individual was placed on part-time sick leave. This can be achieved by applying the weights

$$w_{k} = \frac{1}{P(S_{k} = s_{k} \mid Z_{k} = z_{k})},$$

where $S_{k}$ is the initial state and $Z_{k}$ is all the relevant covariate information explaining the initial state for individual k. The probabilities of being in either of the two states at baseline can be estimated using ordinary logistic regression. The uncertainty of the estimates from the resulting weighted multi-state analysis can easily be calculated using for example the coxph function in R with robust standard errors . Another causal contrast of interest would be to compare the scenario where everyone has a cooperation agreement on a more inclusive working life with a scenario where no one has such an agreement. This would correspond to modelling a situation where such agreements were randomized. It can be modelled by weighting every individual in the original data with the inverse probability of having a cooperation agreement on a more inclusive working life given covariates, that is, by applying the weights

$$w_{k} = \frac{1}{P(E_{k} = e_{k} \mid Z_{k} = z_{k})},$$

where $E_{k}$ is an indicator variable that is 1 if an agreement is present and 0 otherwise. The probabilities can again be estimated using logistic regression.
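A minimal sketch of this weighting scheme, again using the simulated objects from the sketches above: the baseline indicator partial and its logistic model are hypothetical stand-ins for the real initial-state variable and confounders.

## Hypothetical baseline file: one row per individual with an indicator
## for starting on partial (1) versus full (0) sick leave.
base <- unique(ms_long[, c("id", "age")])
base$partial <- rbinom(nrow(base), 1, 0.3)

## Inverse probability of the observed initial state from a logistic model.
ps     <- glm(partial ~ age, family = binomial, data = base)$fitted.values
base$w <- ifelse(base$partial == 1, 1 / ps, 1 / (1 - ps))

## Weighted multi-state analysis; cluster(id) requests the robust
## standard errors mentioned in the text.
ms_long$w <- base$w[match(ms_long$id, base$id)]
fit_ipw <- coxph(Surv(Tstart, Tstop, status) ~ strata(trans) + cluster(id),
                 weights = w, data = ms_long)

The weighted fit can then be passed to msfit and probtrans exactly as in the unweighted analysis.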
Assuming positivity for the first type of intervention means that there should be a probability greater than zero of starting in either of the two states of sick leave or partial sick leave at baseline, regardless of any observed covariate history. This is testable, and the covariates in Table are well balanced over the two groups. The biggest difference lies in the distribution of the working ability score, but even in the partial sick leave group 5 % of the individuals have a low ability score (the lowest health score). As for exchangeability, it is a question of whether the included covariates sufficiently explain the differences between those on full and partial sick leave at baseline. The covariates include demographic, socioeconomic, work and health variables, which should be the central parameters. However, to what degree they are sufficiently covered is untestable. The health variable should ideally have been collected at baseline, and not at the first measurement after entering the rehabilitation, but one could hope that in combination with type of work and diagnosis group it will still be sufficient. An example of a variable that was considered, but not included, is the center that the patients attended. Adding this information, which involves adding 7 new dummy variables, seemed to have little impact. We therefore assume that center specific differences between patients are covered sufficiently through the other covariates, especially the working ability score and diagnosis group. For the cooperation agreement intervention, the exposure is not administered at an individual level, and thus the assumptions are even easier to assess. There are no covariate combinations that exclude such agreements, and the most important confounder will be type of work. Both interventions can also be assumed to be well-defined.

G-computation

A third approach, which corresponds to G-computation (or standardization) of the parameter from the inverse probability weighting, is to estimate the transition intensities for individual k conditioned on all relevant covariate information $Z_{k}$ using a Cox proportional hazards or an Aalen additive hazards model, and then predict the state transition probabilities given covariates Z, $P_{hj}(s,t \mid Z)$, for every individual given a specific intervention. As for the inverse probability weighting approach, the intervention can be defined both through setting a specific initial state and through setting a covariate to a specific value. The main causal assumptions are again positivity, exchangeability and consistency, together with the assumption of no model misspecification. However, the model which needs to be correctly specified is now the model for the outcome, and not a model for the exposure as for the inverse probability approach. See for example for a discussion on the causal assumptions of G-computation. For a general discussion on the use of inverse probability weighting and G-computation, and the connection to standardisation, see . If, again, we would like to compare the effect of being put on sick leave versus partial sick leave at baseline, the intervention would correspond to setting the initial state to h = 2 and h = 3, and comparing all individual predictions for both values. The population average effect can then be estimated through

$$\frac{1}{n}\sum_{k} \hat{P}_{3,j}(0,t \mid Z_{k}) - \frac{1}{n}\sum_{k} \hat{P}_{2,j}(0,t \mid Z_{k}),$$

where n is the number of individuals in the study. Confidence intervals can be found using standard bootstrap techniques. Correspondingly, if we consider an intervention such as the cooperation agreement on a more inclusive working life, represented by a binary covariate $E_{k}$, the population average effect of such an intervention can be estimated by

(4) $$\frac{1}{n}\sum_{k} \hat{P}_{i,j}\left(0,t \mid Z_{k}^{E_{k}=1}\right) - \frac{1}{n}\sum_{k} \hat{P}_{i,j}\left(0,t \mid Z_{k}^{E_{k}=0}\right),$$

for given initial states i. As these interventions are the same as those considered for the inverse probability approach, the causal assumptions needed are also identical. See the discussion of these assumptions in the previous subsection.
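The corresponding G-computation step can be sketched as follows, continuing the simulated illustration; the binary exposure E, the outcome model, and the chosen states and time horizon are hypothetical choices for illustration only.

## Hypothetical binary exposure (e.g. a cooperation agreement), constant
## within individual, together with an outcome model that includes it.
ms_long$E <- rbinom(15, 1, 0.5)[ms_long$id]
fit_g <- coxph(Surv(Tstart, Tstop, status) ~ age + E + strata(trans),
               data = ms_long)

## P_hj(0, t | Z, E = e) for one covariate profile via Eq. (2): plug the
## profile into msfit() and product-integrate with probtrans().
p_ind <- function(age, e, h = 2, j = 1, t = 150) {
  nd <- data.frame(age = age, E = e, trans = 1:15, strata = 1:15)
  pt <- probtrans(msfit(fit_g, newdata = nd, trans = tmat),
                  predt = 0, variance = FALSE)[[h]]
  pt[max(which(pt$time <= t)), paste0("pstate", j)]
}

## Population average effect of E as in Eq. (4): average the individual
## contrasts over the empirical covariate distribution (bootstrap the
## whole procedure for confidence intervals).
profiles <- unique(ms_long[, c("id", "age")])
ate <- mean(mapply(p_ind, profiles$age, 1) - mapply(p_ind, profiles$age, 0))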
A multi-center cohort The patients being analyzed are part of a multi-center cohort study with the purpose of studying how health complaints, functional ability and fear avoidance beliefs explain future disability and return-to-work for patients participating in work rehabilitation programs. Data has been collected on 1155 participants from eight different clinics offering comprehensive inpatient work rehabilitation. Mean time on sickness benefits during the last two years before admittance to the work rehabilitation program, were 10 months (SD = 6.7). All participants gave informed consent, allowing for follow-up data on sickness absence benefits to be obtained from national registries, and answered comprehensive questionnaires during their stay at the clinic. The study was approved by the Medical Ethics Committee; Region West in Norway (REK-vest ID 3.2007.178) and the Norwegian social science data services (NSD, ID 16139). The data collected through questionnaires includes various background information together with detailed health variables such as subjective health complaints, physical function, coping and interaction abilities, and fear-avoidance beliefs. See Øyeflaten et al. for more details on the cohort. Data on sickness benefits All Norwegian employees are entitled to sickness benefits such as sick leave benefits, work assessment allowance or disability benefits, if incapable of working due to disease or injury. The employer pays for the first 16 days of a sick leave period, and thereafter The Norwegian Labour and Welfare Administration (NAV) covers the disbursement. Data on these benefits, both the ones covered by the employer and NAV, was obtained from NAV’s register, which contains information on the start and stop dates of sickness benefits given from 1992 and onward for the entire Norwegian population. Data for current analysis Out of the original 1155 participants in the multi-center cohort study, we excluded 4 individuals with an unknown date of departure from their rehabilitation center, 1 individual who had not answered the relevant questions on subjective health complaints and 5 individuals already on disability pension at baseline, and were left with a study sample of 1145 participants. Baseline was set to the time of departure, which varied between May 16th 2007 and March 25th 2009. Individuals were followed up with regard to their received sickness benefits until July 1st 2012, which was the date of data extraction from NAV. The patients being analyzed are part of a multi-center cohort study with the purpose of studying how health complaints, functional ability and fear avoidance beliefs explain future disability and return-to-work for patients participating in work rehabilitation programs. Data has been collected on 1155 participants from eight different clinics offering comprehensive inpatient work rehabilitation. Mean time on sickness benefits during the last two years before admittance to the work rehabilitation program, were 10 months (SD = 6.7). All participants gave informed consent, allowing for follow-up data on sickness absence benefits to be obtained from national registries, and answered comprehensive questionnaires during their stay at the clinic. The study was approved by the Medical Ethics Committee; Region West in Norway (REK-vest ID 3.2007.178) and the Norwegian social science data services (NSD, ID 16139). 
The data collected through questionnaires includes various background information together with detailed health variables such as subjective health complaints, physical function, coping and interaction abilities, and fear-avoidance beliefs. See Øyeflaten et al. for more details on the cohort. All Norwegian employees are entitled to sickness benefits such as sick leave benefits, work assessment allowance or disability benefits, if incapable of working due to disease or injury. The employer pays for the first 16 days of a sick leave period, and thereafter The Norwegian Labour and Welfare Administration (NAV) covers the disbursement. Data on these benefits, both the ones covered by the employer and NAV, was obtained from NAV’s register, which contains information on the start and stop dates of sickness benefits given from 1992 and onward for the entire Norwegian population. Out of the original 1155 participants in the multi-center cohort study, we excluded 4 individuals with an unknown date of departure from their rehabilitation center, 1 individual who had not answered the relevant questions on subjective health complaints and 5 individuals already on disability pension at baseline, and were left with a study sample of 1145 participants. Baseline was set to the time of departure, which varied between May 16th 2007 and March 25th 2009. Individuals were followed up with regard to their received sickness benefits until July 1st 2012, which was the date of data extraction from NAV. The occurrence of an event in survival analysis can be seen as a transition from one state to another, for example from an alive state to a death state. The hazard rate corresponds to the transition intensity between these two states. Multi-state models form a flexible framework allowing for the standard survival model to be extended by adding more than one transition and more than two states. A detailed introduction to multi-state models can be found in review papers such as Hougaard , Commenges , Andersen and Keiding , Putter et al. and Meira-Machado et al. , or the book chapter by Andersen and Pohar Perme . Sickness absence and disability data is a good example of data that are suitable for being modelled within the multi-state framework. Changing between work and being on various types of sickness benefits over time can naturally be perceived as moving between a given set of states. In Norway, employees on partial or full sick leave can be fully compensated through sick leave benefits for up to a year, after which they can be entitled to work assessment allowance. If their underlying health condition provides reasons for it, they may be granted a disability pension or further partial sick leave benefits. The latter is actively recommended by the authorities . Partial sick leave can be graded from 20 to 99 %. Based on these policies we define five states that the participants can move between after being discharged from the rehabilitation centers: work (no received benefits), sick leave, partial sick leave, work assessment allowance and disability pension, and we propose the multi-state model illustrated in Fig. . At baseline, when being discharged from the rehabilitation center, individuals can start in any of the first four states. Individuals are defined as being on sick leave when receiving full sick leave benefits, on partial sick leave when receiving sick leave benefits graded below 100 % and on disability pension when receiving disability pension on unlimited terms. 
Work assessment allowance is a intermediate benefit typically given between sick leave and disability pension. It is granted for individuals going through medical treatment or rehabilitation, or to others that might benefit from vocational rehabilitation actions. There is an upper limit of four years for receiving work assessment allowance. When individuals do not receive any sickness benefits, they are per definition in work. The only exception is when there are gaps with no benefits before receiving disability pension – as there are no real transitions directly from work to disability, such gaps are attributed to the most recently received benefit. To avoid including non-genuine transitions, benefits with a duration of only one day have been discarded. When there were benefits registered which overlapped in time, the newest registered benefit was used. As for initial states; 178 patients started in the work state (receiving no benefits) after being discharged from the rehabilitation center, 106 were on partial sick leave benefits, 496 on full sick leave benefits and 365 were on work assessment allowance. Disability pension was defined to be an absorbing state in the multi-state model, as few transitions were observed to go out of this state in the original data. The total number of subsequent transitions between the five states within the study window is shown in Table . Covariate information include age at baseline, gender, marital status, whether a cooperation agreement on a more inclusive working life is present, educational level, type of work, income, working ability score when entering rehabilitation and diagnosis group at baseline. All covariates are based on information from the questionnaires, except information on type of diagnosis which is retrieved through the ICPC code when available in NAV’s register, and partly from the cohort data at the time of entering the rehabilitation. The current diagnosis at any given time is defined as the last given diagnosis. Note that these selected covariates only are one out of many possible representations of the information in the original data source, constructed to sufficiently describe the differences between patients. Detailed statistics on the covariates are found in Table . The transition intensities for the 15 transitions in the multi-state model from Fig. were examined using the Nelson-Aalen estimator for marginal transition intensities, and Cox proportional hazards and Aalen additive hazards models for conditional transition intensities using relevant covariate information. Cox and Aalen models were fitted using either the coxph or aareg function in the survival package of the statistical software R . The Nelson-Aalen estimator was calculated by using the coxph function without covariates. Say that X ( t ) denotes the state for an individual at time t . The transition probability matrix P ( s , t ), with elements P hj ( s , t )= P ( X ( t )= j ∣ X ( s )= h ), denoting the transition probability from state h to state j in the time interval ( s , t ], was then estimated by the matrix product-integral formula (1) [12pt]{minimal} $$ }(s,t) = _{u (s,t]} ( + d}}(u)), $$ P ^ ( s , t ) = ∏ u ∈ ( s , t ] I + d A ^ ( u ) , where [12pt]{minimal} $ { {{A}}}(u)$ A ^ ( u ) is the corresponding estimated cumulative transition intensity matrix at time u . The cumulative intensities in [12pt]{minimal} $ { {{A}}}(u)$ A ^ ( u ) are estimated using the Nelson-Aalen estimator. 
The cumulative transition intensity matrix could also be estimated conditioning on covariates Z , changing the formula in Eq. to (2) [12pt]{minimal} $$ }_{Z}(s,t) = _{u (s,t]} ( + d}}_{Z}(u)), $$ P ^ Z ( s , t ) = ∏ u ∈ ( s , t ] I + d A ^ Z ( u ) , where [12pt]{minimal} $ { {P}}_{Z}(s,t)$ P ^ Z ( s , t ) and [12pt]{minimal} $ { {A}}_{Z}(u)$ Â Z ( u ) are the estimated covariate specific transition probability matrix and cumulative transition intensity matrix respectively. The cumulative intensities in [12pt]{minimal} $ { {A}}_{Z}(u)$ Â Z ( u ) is estimated for given values of Z using Cox proportional hazards models or Aalen additive hazards models. From the estimated transition probability matrix one can study the probabilities of being in state j at time t when starting in state h at baseline, [12pt]{minimal} $ {P}_{ {hj}}(0,t)$ P ^ hj ( 0 , t ) , or the overall probability of being in state j at time t , (3) [12pt]{minimal} $$ (X(t) = j) = _{k} _{kj}(0,t) (X(0)=k). $$ P ^ ( X ( t ) = j ) = ∑ k P ^ kj ( 0 , t ) · P ^ X ( 0 ) = k . For models without covariates, P ( X (0)= k ) can be estimated by the proportion starting in state k . With covariates, it can be estimated using logistic regression. With cumulative hazard estimates from the Nelson-Aalen estimator, the formula in corresponds to the Aalen-Johansen estimator. With this marginal approach or with covariate adjusted cumulative hazards like in Eq. estimated using Cox proportional hazards models, estimates and confidence intervals were calculated using the mstate package . Using cumulative hazard estimates from Aalen additive hazards models, the estimator from Eq. has to be implemented separately. Confidence intervals can be calculated using bootstrap methods or analytically as described in Aalen, Borgan and Gjessing (, p. 183). Note that there is an intrinsic Markov assumption in this way of multi-state modelling which can be challenging when using complex data such as data based on sick leave and disability benefits. When the length of stay in a state affects the intensity for leaving the state, this assumption is in principal being violated. This is the case in three of the states in our multi-state model due to administrative regulations. Individuals can only be on sick leave or partial sick leave spells of maximum one year, and on work assessment allowance for a maximum of four years. To what degree such violations pose a problem will however depend on how often individuals stay in these states long enough for the regulations to take effect, which again partly depend on the follow-up time of the study. In our study we have individual follow-up times ranging between three and five year, which means that the maximum time of four years for work assessment allowance not will pose a problem. In fact, the mean length of stay in this state is 274 days (with a 95 % percentile of 1028 days). Also sick leave and partial sick leave spells close to a year is very rare in our study population, with a mean stay of 38 days on sick leave and 68 days on partial sick leave (and corresponding 95 % percentils of 180 and 218 days). Overall, this seems to indicate that while serious violations to the Markov assumption are possible, they are in practice uncommon and should not make any big impacts on the results for our study. However, in general one should be aware that violations of this assumption may impact some of the estimated effects, including the causal parameters of interest. 
Note also that more advance models relaxing the Markov assumption have been developed, but the impact of such violations will vary and could often be disregarded. See for example Gunnes et al. and Allignol et al. , who only show small discrepancies between Markov and non-Markov models in situations where the Markov assumption is not met. When focusing on overall state occupation probabilities as in Eq. , Datta and Satten have showed that the product-integral estimator in Eq. is consistent regardless of whether the Markov assumption is being valid. Besides estimating transition intensities and probabilities for a given set of states in a multi-state model and doing individual predictions, it is also of interest to evaluate population average effects of interventions in the multi-state model framework. There is a fundamental difference between merely predicting covariate specific outcomes and to estimate the causal effect of intervention on them, which creates a need for special methods and assumptions. We now consider three different approaches based on classical methods from the causal inference literature. The methods are exemplified with regard to the two types of possible interventions mentioned in the Introduction. The first intervention is the use of partial versus full time sick leave, where partial sick leave often is thought to cause shorter absence and higher subsequent employment . The other intervention is the use of cooperation agreements on more inclusive working life, which in Norway has been implemented with the goal of improving work environment, enhance presence at work, prevent and reduce sick leave and prevent exclusion and withdrawal from working life. A secondary aim is to prevent withdrawal and to increase employment of people with impaired functional ability. Participating enterprises must systematically carry out health and safety measures, with inclusive working life as an integral part, and will in return receive prevention and facilitation subsidies and have their own contact person at NAV . Note that the first of these two interventions is represented through states in our multi-state model in Fig. , while the latter is represented as an additional covariate as shown in Table . As for causal assumptions we will focus on the three general conditions which have been identified for estimating average causal effects; positivity, exchangeability (“no unmeasured confounding”) and consistency (“well-defined interventions”) . We will also discuss how the related modularity condition, e.g. from the Pearl framework of causal inference , is relevant in our context of multi-state models. Additionally, as always, we need the statistical assumptions of no model-misclassification, which in our case is important both at an intensity and overall multi-state level. The importance and validity of all these assumptions are discussed separately for the three different approaches in following sub sections. Artificially manipulating transition intensities One proposed method for making causal inference in multi-state models is to artificially change certain transition intensities in [12pt]{minimal} $ { {{A}}}(u)$ A ^ ( u ) and then explore the corresponding hypothetical transition probabilities . Such changes in transition intensities, creating a new transition intensity matrix which can be denoted [12pt]{minimal} $ { {{A}}}(u)$ A ~ ( u ) , may represent interventions. 
The hypothetical transition probabilities, which we can denote [12pt]{minimal} $ { {{P}}}(s,t)$ P ~ ( s , t ) , may then represent counterfactual outcomes. Confidence intervals for such hypothetical transition probabilities can be found through the distribution of the cumulative intensities after manipulation. For situations without covariates and for the additive hazards model this will follow by the arguments in Aalen, Borgan and Gjessing (, p. 123–126 and 181–190). For the Cox model it will follow by the functional delta method in Andersen, Borgan, Gill & Keiding (, p. 512–515). For more on these types of analyses with respect to causal inference, and especially the connection to G-computation, see Keiding et al. and Aalen, Borgan and Gjessing (, p. 382). The important causal assumption for this approach to be reasonable is that when intervening on a set of transition intensities, the remaining transition intensities stay unchanged. This is equivalent to the modularity assumption and definition of a structural causal model in the Pearl framework of causal inference . See Aalen et al. and Røysland for more on modularity in the light of intensity processes. However, even when it is unreasonable that such an assumption is fully met, it has been argued that this kind of inference in multi-state models still can give valuable insights (, p. 250). In this paper we will follow the ideas from Keiding et al. for our multi-state model for sickness absence and work in Fig. , and define interventions through manipulating transition rates within given sets of covariate values, where such interventions would be realistic. One example of an intervention would be to increase the use of partial sick leave compared to full sick leave, which would correspond to modifying the intensities into the partial sick leave and sick leave states. For the modularity assumption to be met in this case, it means that the additional individuals counterfactually put on partial sick leave instead of full sick leave, should behave identical to those individuals who were observed on partial sick leave in the original data. As those on partial sick leave generally are in a better health state than those on full sick leave, this is not a reasonable assumption. However, it is reasonable within similar stratums of covariate levels, which we will study in later in this paper. Satisfying the condition of modularity in this manner, also will imply that the assumptions of positivity, exchangeability and consistency are met. Inverse probability weighting Another approach from the causal inference literature is inverse probability of treatment (or propensity score) weighting . The treatment or exposure of interest can be represented either as states in the multi-state model or through additional covariates. One could for example weight by the inverse probability of being in a given state at baseline, before estimating the transition intensities of the model in Fig. . This would correspond to modelling a counterfactual scenario where there is a copy of each individual in every possible initial state. The sufficient conditions for this approach to be valid is again the causal assumptions of positivity, exchangeability and consistency. Positivity here means that there should be a non-zero probability of receiving all possible exposures for all covariate values in the population. Also, the model for the exposure, which is the foundation for the weights, must be well specified. 
See for example for a further discussion on these assumptions. Say that we would like to compare the effect of being put on sick leave versus partial sick leave at baseline (when being discharged from the rehabilitation center). Let us for now only consider those starting in either of these two states. Whether an individual is put on full or partial sick leave at baseline is hardly randomized. We could however model the counterfactual situation where everyone, regardless of their covariate information, were put on full sick leave at baseline and an identical copy of each individual were placed on part time sick leave. This can be achieved by applying the weights [12pt]{minimal} $$ w_{k} = = s_{k}|Z_{k} = z_{k})}, $$ w k = 1 P ( S k = s k | Z k = z k ) , where S k is the initial state and Z k is all the relevant covariate information explaining the initial state for individual k . The probabilities of being in either of the two states at baseline can be estimated using ordinary logistic regression. The uncertainty of the estimates from the resulting weighted multi-state analysis can easily be calculated using for example the coxph function in R with robust standard errors . Another casual contrast of interest would be to compare the scenario where everyone got a cooperation agreement on a more inclusive working life with a scenario where no-one had such an agreement. This would correspond to modelling a situation where such agreements were randomized. This could be modelled by weighting every individual in the original data with the inverse probability of having a cooperation agreement on a more including working life given covariates, by applying the weights [12pt]{minimal} $$ w_{k} = = e_{k}|Z_{k} = z_{k})}, $$ w k = 1 P ( E k = e k | Z k = z k ) , where E k is an indicator variable that is 1 if an agreement is present and 0 otherwise. The probabilities can again be estimated using logistic regression. Assuming positivity for the first type of intervention means that there should be a probability greater than zero for starting in either of the two states of sick leave or partial sick leave at baseline, regardless of any observed covariate history. This is testable, and the covariates in Table are well balanced over the two groups. The biggest difference lies in the distribution of the working ability score, but even in the partial sick leave group 5 % of the individuals has a low ability score (the lowest health score). As for exchangeability it is a question of whether the included covariates sufficiently explain the differences between those on full and partial sick leave at baseline. The covariates include demographic, socioeconomic, work and health variables, which should be the central parameters. However, to what degree they are sufficiently covered is untestable. The health variable should ideally had been collected at baseline, and not at the first measurement after entering the rehab, but one could hope that in combination with type of work and diagnosis group, it will still be sufficient. An example of a variable that was considered, but not included, is the center that the patients attended. Adding this information, which involves adding 7 new dummy variables, seemed to have little impact. We therefore assume that center specific differences between patients are covered sufficiently through the other covariates, and especially working ability score and diagnosis group. 
For the cooperation agreement intervention, this is not administered at an individual level, and thus the assumptions are even easier to assess. There are no covariate combinations that exclude such agreements and the most important confounder will be type of work. Both interventions can also be assumed well-specified. G-computation A third approach, which corresponds to G-computation (or standardization) of the parameter from the inverse probability weighting, is to estimate the transition intensities for individual k conditioned on all relevant covariate information Z k using a Cox proportional hazards or an Aalen additive hazards model, and then predict the state transition probabilities given covariates Z , P hj ( s , t ∣ Z ), for every individual given a specific intervention. As for the inverse probability weighting approach, the intervention could be defined both through setting a specific initial state or a covariate to a specific value. The main causal assumptions are again positivity, exchangeability and consistency, together with the assumption of no model misspecification. However, the model which needs to be correctly specified is now the model for the outcome, and not a model for the exposure as for the inverse probability approach. See for example for a discussion on the causal assumptions of G-computation. For a general discussion on the use of inverse probability weighting and G-computation, and the connection to standardisation, see . If, again, we would like to compare the effect of being put on sick leave versus partial sick leave at baseline, the intervention would correspond to setting their initial state to h =2 and h =3, and compare all individual predictions for both values. The population average effect can then be estimated through [12pt]{minimal} $$ _{k} _{3,j}(0,t Z_{k}) - _{k} _{2,j}(0,t Z_{k}), $$ 1 n ∑ k P ^ 3 , j ( 0 , t ∣ Z k ) − 1 n ∑ k P ^ 2 , j ( 0 , t ∣ Z k ) , where n is the number of individuals in the study. Confidence intervals can be found using standard bootstrap techniques. Correspondingly, if we consider an intervention such as the cooperation agreement on a more inclusive working life, represented by a binary covariate E k , the population average effect of such an intervention can be estimate by (4) [12pt]{minimal} $$ _{k} _{i,j}(0,t Z_{k}^{E_{k}=1}) - _{k} _{i,j}(0,t Z_{k}^{E_{k}=0}), $$ 1 n ∑ k P ^ i , j 0 , t ∣ Z k E k = 1 − 1 n ∑ k P ^ i , j 0 , t ∣ Z k E k = 0 , for given initial states i . As these interventions are the same as the ones in question for the inverse probability approach, the causal assumptions need are also identical. See the discussion of these assumptions in the previous sub section. One proposed method for making causal inference in multi-state models is to artificially change certain transition intensities in [12pt]{minimal} $ { {{A}}}(u)$ A ^ ( u ) and then explore the corresponding hypothetical transition probabilities . Such changes in transition intensities, creating a new transition intensity matrix which can be denoted [12pt]{minimal} $ { {{A}}}(u)$ A ~ ( u ) , may represent interventions. The hypothetical transition probabilities, which we can denote [12pt]{minimal} $ { {{P}}}(s,t)$ P ~ ( s , t ) , may then represent counterfactual outcomes. Confidence intervals for such hypothetical transition probabilities can be found through the distribution of the cumulative intensities after manipulation. 
For situations without covariates and for the additive hazards model this will follow by the arguments in Aalen, Borgan and Gjessing (, p. 123–126 and 181–190). For the Cox model it will follow by the functional delta method in Andersen, Borgan, Gill & Keiding (, p. 512–515). For more on these types of analyses with respect to causal inference, and especially the connection to G-computation, see Keiding et al. and Aalen, Borgan and Gjessing (, p. 382). The important causal assumption for this approach to be reasonable is that when intervening on a set of transition intensities, the remaining transition intensities stay unchanged. This is equivalent to the modularity assumption and definition of a structural causal model in the Pearl framework of causal inference . See Aalen et al. and Røysland for more on modularity in the light of intensity processes. However, even when it is unreasonable that such an assumption is fully met, it has been argued that this kind of inference in multi-state models still can give valuable insights (, p. 250). In this paper we will follow the ideas from Keiding et al. for our multi-state model for sickness absence and work in Fig. , and define interventions through manipulating transition rates within given sets of covariate values, where such interventions would be realistic. One example of an intervention would be to increase the use of partial sick leave compared to full sick leave, which would correspond to modifying the intensities into the partial sick leave and sick leave states. For the modularity assumption to be met in this case, it means that the additional individuals counterfactually put on partial sick leave instead of full sick leave, should behave identical to those individuals who were observed on partial sick leave in the original data. As those on partial sick leave generally are in a better health state than those on full sick leave, this is not a reasonable assumption. However, it is reasonable within similar stratums of covariate levels, which we will study in later in this paper. Satisfying the condition of modularity in this manner, also will imply that the assumptions of positivity, exchangeability and consistency are met. Another approach from the causal inference literature is inverse probability of treatment (or propensity score) weighting . The treatment or exposure of interest can be represented either as states in the multi-state model or through additional covariates. One could for example weight by the inverse probability of being in a given state at baseline, before estimating the transition intensities of the model in Fig. . This would correspond to modelling a counterfactual scenario where there is a copy of each individual in every possible initial state. The sufficient conditions for this approach to be valid is again the causal assumptions of positivity, exchangeability and consistency. Positivity here means that there should be a non-zero probability of receiving all possible exposures for all covariate values in the population. Also, the model for the exposure, which is the foundation for the weights, must be well specified. See for example for a further discussion on these assumptions. Say that we would like to compare the effect of being put on sick leave versus partial sick leave at baseline (when being discharged from the rehabilitation center). Let us for now only consider those starting in either of these two states. Whether an individual is put on full or partial sick leave at baseline is hardly randomized. 
We could, however, model the counterfactual situation where everyone, regardless of their covariate information, was put on full sick leave at baseline and an identical copy of each individual was placed on partial sick leave. This can be achieved by applying the weights

$$ w_{k} = \frac{1}{P(S_{k} = s_{k} \mid Z_{k} = z_{k})}, $$

where S_k is the initial state and Z_k is all the relevant covariate information explaining the initial state for individual k. The probabilities of being in either of the two states at baseline can be estimated using ordinary logistic regression. The uncertainty of the estimates from the resulting weighted multi-state analysis can easily be calculated using, for example, the coxph function in R with robust standard errors . Another causal contrast of interest would be to compare the scenario where everyone had a cooperation agreement on a more inclusive working life with a scenario where no-one had such an agreement. This would correspond to modelling a situation where such agreements were randomized, and can be achieved by weighting every individual in the original data with the inverse probability of having a cooperation agreement on a more inclusive working life given covariates, that is, by applying the weights

$$ w_{k} = \frac{1}{P(E_{k} = e_{k} \mid Z_{k} = z_{k})}, $$

where E_k is an indicator variable that is 1 if an agreement is present and 0 otherwise. The probabilities can again be estimated using logistic regression. Assuming positivity for the first type of intervention means that there should be a probability greater than zero of starting in either of the two states of sick leave or partial sick leave at baseline, regardless of any observed covariate history. This is testable, and the covariates in Table are well balanced over the two groups. The biggest difference lies in the distribution of the working ability score, but even in the partial sick leave group 5 % of the individuals have a low ability score (the lowest health score). As for exchangeability, the question is whether the included covariates sufficiently explain the differences between those on full and partial sick leave at baseline. The covariates include demographic, socioeconomic, work and health variables, which should be the central parameters, but to what degree they are sufficiently covered is untestable. The health variable should ideally have been collected at baseline, and not at the first measurement after entering rehabilitation, but one could hope that in combination with type of work and diagnosis group it will still be sufficient. An example of a variable that was considered, but not included, is the center that the patients attended. Adding this information, which involves adding 7 new dummy variables, seemed to have little impact. We therefore assume that center-specific differences between patients are covered sufficiently through the other covariates, especially working ability score and diagnosis group. For the cooperation agreement intervention, the agreement is not administered at an individual level, and thus the assumptions are even easier to assess. There are no covariate combinations that exclude such agreements, and the most important confounder will be type of work. The exposure models for both interventions can also be assumed well specified.
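As a concrete illustration, the following R sketch implements the weighting for the baseline-state contrast. It assumes a counting-process data frame d with one row per observed transition interval (columns id, Tstart, Tstop, status and a transition number trans), a baseline variable initial_state, and baseline covariates; all variable names are placeholders, not the exact code used for the analyses.

library(survival)

# One row per individual, restricted to those starting on full (2) or
# partial (3) sick leave
base <- subset(d[!duplicated(d$id), ], initial_state %in% c(2, 3))

# Exposure model for the weights: P(partial sick leave at baseline | Z)
ps_fit <- glm(I(initial_state == 3) ~ age + gender + work_type + ability_score,
              family = binomial, data = base)
p3 <- predict(ps_fit, type = "response")
base$w <- ifelse(base$initial_state == 3, 1 / p3, 1 / (1 - p3))

# Attach the individual weights to all transition rows and fit a weighted,
# transition-stratified Cox model; cluster(id) requests robust standard errors
d2   <- subset(d, id %in% base$id)
d2$w <- base$w[match(d2$id, base$id)]
fit  <- coxph(Surv(Tstart, Tstop, status) ~ strata(trans) + cluster(id),
              data = d2, weights = w)
# survfit(fit) then gives the weighted cumulative hazard for each transition,
# from which the state transition probabilities follow as before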
G-computation

A third approach, which corresponds to G-computation (or standardization) of the parameter from the inverse probability weighting, is to estimate the transition intensities for individual k conditioned on all relevant covariate information Z_k, using a Cox proportional hazards or an Aalen additive hazards model, and then predict the state transition probabilities given covariates Z, P_hj(s, t | Z), for every individual under a specific intervention. As for the inverse probability weighting approach, the intervention can be defined both through setting a specific initial state and through setting a covariate to a specific value. The main causal assumptions are again positivity, exchangeability and consistency, together with the assumption of no model misspecification. However, the model which needs to be correctly specified is now the model for the outcome, not a model for the exposure as in the inverse probability approach. See for example for a discussion of the causal assumptions of G-computation. For a general discussion of the use of inverse probability weighting and G-computation, and the connection to standardisation, see . If, again, we would like to compare the effect of being put on full sick leave versus partial sick leave at baseline, the intervention would correspond to setting the initial state to h = 2 and h = 3, and comparing all individual predictions for the two values. The population average effect can then be estimated through

$$ \frac{1}{n}\sum_{k} \hat{P}_{3,j}(0,t \mid Z_{k}) - \frac{1}{n}\sum_{k} \hat{P}_{2,j}(0,t \mid Z_{k}), $$

where n is the number of individuals in the study. Confidence intervals can be found using standard bootstrap techniques. Correspondingly, if we consider an intervention such as the cooperation agreement on a more inclusive working life, represented by a binary covariate E_k, the population average effect of such an intervention can be estimated by

$$ \frac{1}{n}\sum_{k} \hat{P}_{i,j}\left(0,t \mid Z_{k}^{E_{k}=1}\right) - \frac{1}{n}\sum_{k} \hat{P}_{i,j}\left(0,t \mid Z_{k}^{E_{k}=0}\right), \qquad (4) $$

for a given initial state i. As these interventions are the same as the ones considered for the inverse probability approach, the causal assumptions needed are also identical; see the discussion of these assumptions in the previous subsection.
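The averaging in Eq. (4) is straightforward once individual predictions are available. The R sketch below assumes a helper predict_P(newdata, i, j, t) that returns the fitted P_ij(0, t | Z) for one covariate row (in practice obtained from transition-specific Cox models combined with msfit() and probtrans() from the mstate package), and a baseline data frame base with one row per individual; the helper and all variable names are illustrative only.

g_comp_effect <- function(base, i, j, t) {
  pred1 <- sapply(seq_len(nrow(base)), function(k)
    predict_P(transform(base[k, ], agreement = 1), i, j, t))  # everyone exposed
  pred0 <- sapply(seq_len(nrow(base)), function(k)
    predict_P(transform(base[k, ], agreement = 0), i, j, t))  # no-one exposed
  mean(pred1) - mean(pred0)  # Eq. (4): average over the n individuals
}

# e.g. the effect on P(work at day 365 | work assessment allowance at baseline):
# g_comp_effect(base, i = 4, j = 1, t = 365)
# A percentile bootstrap over individuals, refitting the transition models in
# each resample, gives the confidence intervals reported below.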
Unadjusted analysis

Unadjusted cumulative intensities for the 15 transitions in the multi-state model in Fig. , estimated using the Nelson-Aalen estimator, are found in Fig. . We see how the magnitude of the estimated transition intensities varies between states, and that transitions from sick leave to work have the highest intensity. Note that the estimated intensities correspond to the slopes of the cumulative estimates in this figure. The estimated time-varying transition probabilities, found by Eq. , give rise to the stacked probability plots in Fig. , given the four possible initial states (work, sick leave, partial sick leave and work assessment allowance). For example, we see that an individual who is on sick leave at time 0 has an unadjusted probability of approximately 0.50 of having returned to work after three years. The unadjusted probability of being disabled after the same period is approximately 0.10. Overall state occupation probabilities calculated according to are shown in Fig. . We see, for example, that overall there is a rapid increase in work after discharge from the rehabilitation center, from just below 20 % to just below 50 % during the first year. The general tendencies in this figure are similar to those in the paper by Øyeflaten et al. , who performed an unadjusted analysis on a subset of the patients included in the current analysis. Note that in the remainder of this paper we focus on state transition probability plots, but similar plots of the state occupation probabilities can also be derived.

Covariate adjusted analysis and individual predictions

Adjusting for the covariates age, gender, marital status, higher education, type of work, income, cooperation agreement on a more inclusive working life, work ability score and baseline diagnosis when estimating the transition hazards allows for covariate-specific predictions of the state transition probabilities. Figure shows two examples of such predictions: a married female aged 30 in an educational job, with an agreement on inclusive working life, income above NOK 300 000, higher education, working ability score 4 and a mental diagnosis; and a single male aged 60 in a manual job, with no agreement on inclusive working life, income below NOK 300 000, no higher education, work ability score 4 and a musculoskeletal diagnosis. Note that when fitting the models, from the original covariates described in Table , those who did not answer the questions on marital status, higher education or having an inclusive working life agreement were put in the "no" category. We see that the estimated state transition probabilities for the two sets of covariates clearly differ with respect to work. The probability of returning to work within the follow-up time is almost 0.80 for the female example, while only about 0.10–0.15 for the male example. Note that the stacked probability plots in Figs. and do not include confidence intervals. In Fig. we explore these by showing the probability of having returned to work from state 4 (work assessment allowance) at any time, with corresponding confidence intervals, for the two scenarios in Fig. . We see that the probability of returning to work after being on work assessment allowance is very different for individuals with the two sets of covariates, also when accounting for the uncertainty of the estimates. The results using a Cox proportional hazards model were also compared with an Aalen additive hazards model for modelling the transition intensities in our multi-state model. Even in simple additive models where constant hazards were assumed, we saw a good agreement between the additive and proportional hazards models. See the next subsection for a further comparison between these two types of hazard models.
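Such covariate-specific predictions can be computed with the mstate package. The sketch below assumes the data have been prepared in long format (dlong, one row per individual and possible transition, e.g. with msprep()) together with the transition matrix tmat for the 15 transitions of Fig. ; for brevity only two covariates with effects common across transitions are shown, and all names are placeholders rather than the models actually fitted here.

library(survival)
library(mstate)

# Transition-specific Cox models in one fit, stratified by transition number
fit <- coxph(Surv(Tstart, Tstop, status) ~ age + female + strata(trans),
             data = dlong, method = "breslow")

# Covariate profile to predict for: one row per transition, with the
# transition number in the mandatory 'strata' column
nd  <- data.frame(age = rep(30, 15), female = rep(1, 15), strata = 1:15)
msf <- msfit(fit, newdata = nd, trans = tmat)  # cumulative hazards given Z
pt  <- probtrans(msf, predt = 0)               # P_hj(0, t | Z) for all h and j
plot(pt, from = 2)  # stacked probabilities from the sick leave state (state 2)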
The effect of hypothetical interventions

Let us now consider results from the three proposed methods for doing causal inference in our multi-state model. For assessing hypothetical interventions on the use of full and partial sick leave benefits in the multi-state model in Fig. , let us first look at a scenario where we artificially manipulate the transition intensities that go into the partial sick leave and sick leave states. Figure shows the state transition probabilities for an individual starting in the work state at baseline. The left panel shows the estimated probabilities given the original multi-state model, while the right panel shows a counterfactual scenario where all transitions into full sick leave are blocked and routed into partial sick leave. This manipulation of the multi-state model corresponds to removing the possibility of full-time sick leave and instead putting individuals on partial sick leave. For such a manipulation to be reasonable, it should be done within a set of covariate characteristics where the intervention is realistic. The figure shows results for married males aged 45 in an educational job, with income below NOK 300 000, no higher education, working capacity score 1 and a musculoskeletal diagnosis. From Fig. , we see that the state transition probabilities are similar for the two scenarios, but that individuals tend to quit full-time work more frequently when full-time sick leave is not available. The use of part-time sick leave benefits is of course higher, but the use of work assessment allowance and disability pension is actually lower. Let us then consider the inverse probability weighting approach, and first the hypothetical intervention of placing all individuals on either full or part-time sick leave at baseline. To assess such an intervention we focus only on the individuals on partial or full sick leave at baseline and give them a weight corresponding to the inverse probability of starting in their initial state. Then we estimate all transition intensities of the multi-state model in Fig. and calculate the state transition probabilities as functions of time. This corresponds to comparing partial and full sick leave as if they were randomized at baseline. State transition probabilities for these two scenarios are shown in Fig. . Note that when intervening on initial states in a model that is Markov, as we do here, the differences between the two interventions become smaller and smaller with time. When comparing partial and full sick leave, the difference is mostly visible during the first year. To give a more detailed picture of this difference, the time axis in Fig. has been restricted to go from 0 to 365 days. Probabilities of starting in a given initial state were calculated using logistic regression, adjusting for the covariates in Table . We see a tendency that partial sick leave yields a faster return to work than full-time sick leave, and to a certain degree replaces the use of work assessment allowance, but the differences are small. Another intervention in question was the cooperation agreement on a more inclusive working life. The effect of this agreement can be assessed by weighting with the inverse probability of having an agreement and then looking at the transition probabilities for the weighted subsets of the original data for those without and with an agreement. This corresponds to modelling two counterfactual scenarios: one where no-one has such an agreement and another where everyone has one. The results from such a comparison are shown in Fig. . Probabilities of having an agreement were calculated using logistic regression, adjusting for the covariates in Table . We see a small but positive effect of having an agreement on a more inclusive working life with respect to the probability of returning to work.
Finally, if we consider the G-computation approach, we can again estimate the effect of having an agreement on a more inclusive working life by estimating state transition probabilities for every individual when the indicator variable for such an agreement is fixed first to 0 and then to 1, and looking at the average predictions over all individuals. The average predictions can be seen in Fig. , from a Cox model in the upper panels and from an Aalen additive model in the lower panels. The two hazard models give very similar results. The smooth curves for the additive models are due to the assumption of constant hazard rates, which simplifies the model fitting. The left panels show overall state transition probabilities without an agreement and the right panels show overall transition probabilities with an agreement. We again see a small but positive effect of having such an agreement, and the results are very similar to those from the inverse probability weighting approach in Fig. . As described earlier, a similar analysis can be done with regard to starting on partial or full-time sick leave at baseline. Again, the results (not shown) are similar to the ones estimated using inverse probability weights (shown in Fig. ). An alternative way to illustrate the effect of the inclusive working life agreement is to plot the difference in state transition probabilities, for instance of returning to the work state from work assessment allowance. Ninety-five percent confidence intervals for such effects can be found using bootstrap techniques. Note, however, that such a bootstrap can be computationally heavy, for example in the G-computation approach when averaging over all individual predictions. A possible shortcut is to make one prediction for average covariate levels together with the manipulated covariate. Formally, this can be justified for additive hazards models, but in our applications we found that it also gave a good approximation with Cox models. Results from such an analysis can be found in Fig. , using Cox proportional hazards models to estimate the causal effect in Eq. (4), and the latter bootstrap approach for confidence intervals. We see that, after the first year, there is a rather constant positive effect of having a cooperation agreement on a more inclusive working life, with about 5 percentage points higher probability of entering the work state. However, the uncertainty is relatively high, with a 95 % bootstrap confidence interval ranging from about 1 to 10 percentage points.
Discussion

One of the important goals of sickness absence research is to find effective interventions for controlling it. Registry data on sickness benefits are a primary source for making such inference, and multi-state models have proved to be a very successful framework for modelling the transitions between different benefits and work in such data. Coupling registry data with detailed information about cohort participants gives further insight into the underlying reasons for sickness absence and allows patient-specific predictions of the probabilities of future sickness absence, disability and return to work. Combining these methods with standard methods from causal inference is a first attempt to answer questions about the effect of interventions. In this paper we have considered two such possible interventions, namely the use of partial sick leave and cooperation agreements for a more inclusive working life. Covariate-specific predictions show great differences in the probabilities of sick leave, disability and work between patients with assumed high-risk and low-risk covariate characteristics. Overall, we find small effects of partial sick leave compared to full sick leave on state transition probabilities.
Note, however, that in terms of expenses, partial sick leave benefits are less costly than full sick leave benefits, and thus no difference in outcome between the two would indicate that partial sick leave should be preferred when possible. For cooperation agreements on a more inclusive working life we find more visible, but still rather small, effects. Again, in terms of overall expenses, the effects of having such agreements must be weighed against the cost of implementing them. When it comes to graphically representing the outcome in multi-state models there are many possibilities, and we have only looked at some of them. Stacked probability plots, of either state transition or state occupation probabilities, are illustrative, while non-stacked plots make it easier to include confidence intervals. When assessing the effect of interventions one can plot the difference in these probabilities, as we have done, or alternatively the ratio between state transition or occupation probabilities. Another possible outcome measure is the area under each curve, which corresponds to the expected time spent in each state during follow-up. Methodologically, the graphical features of the multi-state model framework make it very suitable for thinking in terms of causal inference, both through the intuitiveness of defining interventions by manipulating transition intensities and through interpreting the outcomes of interventions using state transition and occupation probabilities. We also find that standard approaches from the causal inference literature, such as inverse probability weighting and G-computation, can help identify causal parameters that are easily interpreted in a multi-state model setting. The methods applied in this paper are kept rather simple, partly for illustrative purposes, but can easily be extended to estimate effects of time-varying exposures or interventions and to compare treatment regimes. One should, however, expect that this makes both standard model assumptions and causal assumptions harder to meet. For the modelling of transition intensities it is reassuring that the Cox proportional hazards models and the Aalen additive hazards models gave similar results. The two models have different advantages in the setting of this paper. The Cox model is easier to implement using existing software, while the additive model needs more model-fitting assessment, for example in deciding how to smooth the estimated cumulative hazards to obtain well-behaved hazard estimates. In this paper we could assume constant intensities for the additive hazards models, which simplifies the model fitting. For individual predictions, the additive models are not ideal, as they can give probability estimates below 0 or above 1 for uncommon combinations of covariates. A major benefit of the additive hazards model, however, is that because of its additive structure, predicting with average covariate values is a shortcut to the individual predictions used in the G-computation approach. Apart from the standard model assessments when fitting separate hazard models for each transition, the most important statistical assumption to consider is of course the Markov assumption for the overall multi-state model, which was discussed in the section. As for causal assumptions, it is clear that with the complexity of multi-state models, causal interpretation should not be made naively.
To interpret all the separate transition intensity models and the overall multi-state model causally is challenging. To what degree such causal assumptions are needed will, however, depend on the approach used to define the intervention of interest. When intervening on transition intensities, the structural assumptions of the full model are key, while when intervening on treatment indicator variables, as in the approach referred to as G-computation, the causal interpretation of the coefficient for this variable in each separate hazard model is of particular importance. The goal of this paper, in terms of causal inference, is to illustrate how standard approaches can be used in a multi-state model setting to answer questions about the effect of interventions. When it comes to formal arguments for the validity of these approaches there is room for more work, especially on the sensitivity of the Markov assumption and how deviations from it will affect the validity of the causal assumptions. Overall, we believe that there are many benefits to thinking in terms of causal inference for multi-state models, as research questions often boil down to questions about the effect of interventions. It is also worth noticing that many of these approaches have, at some level, been used in multi-state models historically. In particular, this goes for manipulating transition intensities and fixing covariate values, which in this paper were put in a G-computation context. However, few formal connections have yet been made to the causal inference field. Detailed covariate information is important for explaining transitions between different states of sickness absence and work in a multi-state model, also for patient-specific cohorts. Methods from the causal inference literature can provide the tools needed for going from covariate-specific estimates to population average effects in such models, and thus yield new insights when assessing hypothetical interventions from complex observational data.
Impact of surgical site infection on short- and long-term outcomes of robot-assisted rectal cancer surgery: a two-center retrospective study

Colorectal cancer ranks as the third most common malignancy worldwide and is the second leading cause of cancer-related mortality. Despite significant advancements in early detection and therapeutic strategies, the persistently high morbidity and mortality associated with colorectal cancer continue to represent a substantial global public health challenge . The current standard of care for treating locally advanced rectal cancer involves a multimodal approach, which includes preoperative neoadjuvant radiotherapy followed by total mesorectal excision (TME) . Robot-assisted rectal cancer surgery has emerged as a means to overcome several technical limitations inherent to conventional laparoscopic procedures, thereby enhancing the precision and efficacy of radical resections . As a result, robotic surgery has become a focal point of ongoing research and innovation. Surgical site infection (SSI) is a frequent complication following rectal cancer surgery, with serious implications for postoperative recovery and overall prognosis . Due to the inherently high risk of contamination in rectal surgery, SSIs are particularly prevalent in this context. The development of SSI not only extends hospitalization and escalates healthcare costs, but also heightens the risk of severe complications, such as sepsis and multiorgan failure, which can compromise long-term survival . Therefore, the identification and management of SSI risk factors in robot-assisted rectal resection are crucial for optimizing patient outcomes and improving long-term prognosis. The impact of abdominal infections in robot-assisted rectal cancer surgery remains inadequately understood. Although robotic surgery enables surgeons to perform minimally invasive procedures with enhanced visualization and more intuitive, precise control of surgical instruments , its effectiveness in reducing the incidence of SSI and improving prognosis has not been conclusively demonstrated. This study aims to evaluate the incidence of abdominal infections in robot-assisted rectal cancer surgery and assess their impact on short-term outcomes and long-term prognosis. By providing clinicians with evidence-based data, this study seeks to promote the use of robotic-assisted surgery in the treatment of rectal cancer, ultimately improving patient prognosis and quality of life.
Study design and population

We retrospectively analyzed data from 360 patients with pathologically confirmed rectal cancer (RC) who underwent robot-assisted radical rectal cancer surgery at Fujian Medical University Union Hospital and Longyan First Affiliated Hospital of Fujian Medical University between 2017 and 2024. In this study, 295 patients received treatment at the Union Hospital of Fujian Medical University, while 68 patients were treated at the Longyan First Hospital of Fujian Medical University (Fig. ). The patient characteristics, pathological and surgical findings, and postoperative histological findings documented in our medical records and database were uniformly consistent with adenocarcinoma. This study followed the recommended items in the STROBE statement and was designed and reported according to the standards for reporting case–control studies. The main objective of this study was to evaluate short-term postoperative complications, so only complications occurring within 30 days after surgery were counted, including surgical site infections, anastomotic leaks, pulmonary infections, and small bowel obstruction. No late anastomotic leakage, anastomotic stenosis, or incisional hernia was observed in this study. Surgical site infection (SSI) is categorized into two types: wound SSI, which can be either superficial or deep, and organ/space SSI. Wound SSIs refer to infections occurring at the incision site. Organ/space SSI, on the other hand, involves intra-abdominal or pelvic abscesses, which are accumulations of pus within the abdomen or pelvis. The diagnosis of such abscesses is generally established through advanced imaging modalities, including ultrasonography and computed tomography (CT). The presence of clinical anastomotic leakage may be concomitant with these infections, although it is not a requisite for diagnosis . According to SSI status, the patients were divided into the SSI group and the non-SSI group. The following information was used for the analysis in this study: baseline information, tumor location, pathological information, surgical details, postoperative hospitalization, and follow-up information. The World Health Organization (WHO) BMI classification for Asian populations was used in this study: low body weight (BMI < 18.5 kg/m²) is defined as a body mass index (BMI) below the normal range, reflecting possible malnutrition or underweight; normal weight (BMI 18.5–24.9 kg/m²) is considered a healthy weight range; and overweight/obesity (BMI ≥ 25.0 kg/m²) includes both overweight and obesity. Postoperative anastomotic leakage, an abnormal connection at the anastomotic site after surgery, results in the escape of intestinal contents or inflammatory fluids into the abdominal cavity or out through a drain. Patients present with symptoms such as fever, abdominal pain, bloating, or abnormal bowel function; turbid or feces-containing fluid is discharged from the wound or drain; and peripheral blood leukocytes or C-reactive protein are markedly elevated. CT examination is the gold standard for the diagnosis of anastomotic leakage, showing fluid or gas collection around the anastomosis and enhancement at the suspected leakage site.
Inclusion and exclusion criteria

Inclusion criteria: (1) patients with a confirmed diagnosis of rectal cancer based on postoperative pathological examination, treated between January 2017 and December 2024 at the two study centers; (2) all surgical procedures were conducted by a consistent and specialized surgical team; (3) patients with comprehensive clinicopathological data and complete follow-up records; (4) patients aged 18 years or older. Exclusion criteria: (1) patients identified with distant metastasis of the tumor during preoperative assessment or intraoperative exploration, precluding radical surgical intervention; (2) patients diagnosed with two or more concurrent malignant tumors during either the preoperative or postoperative follow-up period; (3) patients who underwent emergency surgical procedures due to complications such as tumor perforation, obstruction, or bleeding; (4) patients with a diagnosis of familial adenomatous polyposis.

Surgery and perioperative management

This study was conducted across two centers, involving a comprehensive preoperative evaluation for all patients, which included imaging studies, laboratory tests, and detailed medical history assessments, to determine their suitability for undergoing robotic-assisted rectal cancer surgery. All robotic-assisted rectal cancer surgeries were performed by experienced surgical teams from Fujian Medical University Union Hospital and Longyan First Affiliated Hospital of Fujian Medical University. Each attending surgeon on the surgical team had received specialized training in robotic surgery and possessed extensive experience in performing robotic-assisted procedures. The lead surgeon operated the Da Vinci Surgical System throughout the surgeries, ensuring precise and effective surgical interventions. The main indications for preoperative neoadjuvant therapy in patients with resectable locally advanced rectal cancer (LARC) are clinical stage cT3-4 or regional lymph node metastasis without distant metastasis. Especially for patients with low rectal cancer, if the tumor is close to the anal sphincter and the patient has a strong wish to preserve the anus, neoadjuvant therapy can be chosen after full communication with the patient, and the surgical plan decided according to the post-treatment efficacy evaluation . Patients of advanced age or with comorbid metabolic disease should have the left colic artery (LCA) preserved whenever possible, as the colonic marginal arterial arch in these patients may have a restricted blood supply due to atherosclerosis or microangiopathy, and preservation of the LCA improves anastomotic perfusion proximally, thereby reducing the risk of anastomotic leakage. In patients with rectal cancer after neoadjuvant therapy, the decision to preserve the LCA is based on the patient's underlying condition. In patients with familial adenomatous polyposis or Lynch syndrome, preservation of the LCA reserves vascularity for possible future reoperation . Patients with descending colonic rotation have an anatomically abnormal vascular arch, and not preserving the LCA may result in extensive intestinal ischemia. Failure to preserve the LCA and positive inferior mesenteric artery (IMA) root lymph nodes are independent risk factors for distant recurrence after rectal cancer surgery . To ensure complete lymph node dissection, the LCA is preserved in such patients.
When mesenteric tension is too high and the mobilized bowel is insufficient to complete a low-tension anastomosis, we consider dividing the LCA to reduce anastomotic tension. Preoperative bowel preparation was performed in strict accordance with international guidelines and our centers' standardized procedures. For mechanical bowel preparation , patients took a polyethylene glycol electrolyte solution for bowel cleansing on the day before surgery, with the dose adjusted to the patient's weight. Bowel cleansing is usually recommended to be completed within 12–24 h prior to surgery to ensure that intraoperative bowel contents are minimized. The principles of antibiotic administration in this study were as follows: the first dose of a broad-spectrum antibiotic was given intravenously within 30 to 60 min after the induction of anesthesia, to ensure that blood levels reached effective concentrations at the start of surgery . At both centers, the following protocol was implemented: if the duration of surgery exceeded 3 h or intraoperative bleeding exceeded 1500 ml, the antibiotic dose was increased according to the duration of surgery and the amount of bleeding. Postoperative antibiotics were not given for more than 24 h; in patients with severe intraoperative contamination or complex surgery, this could be extended to 48 h. The choice of antibiotic was based on coverage of the target strains and the individual patient. The indications for stoma creation were formulated according to international guidelines and clinical practice , taking into account the specific situation of the patient, and mainly comprised: patients at high risk of anastomotic leakage, such as those with a very low tumor location, those who received neoadjuvant therapy before the operation, and those with high intraoperative anastomotic tension or a poor blood supply; patients with poor systemic conditions, such as advanced age, malnutrition, or severe comorbidities; and patients with poor intraoperative bowel conditions, a high risk of contamination by bowel contents, or inadequate intraoperative bowel cleansing. The placement of an abdominal drain was determined on a case-by-case basis, mainly in patients at high risk of anastomotic leakage and in cases with significant intraoperative contamination or heavy bleeding, where real-time monitoring of the nature and amount of intra-abdominal fluid is required. Anal drains were mainly used in low anastomotic surgery, especially anus-preserving surgery, to reduce localized fluid collection at the anastomosis and to reduce pressure on it. Abdominal drains were usually removed within 48–72 h postoperatively once the drainage was clear or markedly reduced in volume and there were no signs of infection. Anal drains were usually removed within 2–3 days after surgery, with the exact timing depending on the patient's recovery. The standard protocol for regular postoperative follow-up in this study was as follows: patients were reviewed every 3–6 months during the first 2 years after surgery, every 6 months during the 3rd and 4th years, and subsequently every 6–12 months from the 5th year onwards. Laboratory tests conducted during follow-up included routine blood tests, routine biochemical tests, and serum tumor markers, among others.
Imaging examinations performed during follow-up consisted of ultrasound, computed tomography (CT), positron emission tomography (PET) (specifically for the detection of recurrent lesions), and gastrointestinal endoscopy. The survival follow-up methodology adhered to the guidelines of the American Joint Committee on Cancer (AJCC) . Patients were followed up through multiple methods, including outpatient visits, the electronic medical record system, and telephone callbacks, to ensure the completeness and accuracy of the data.

Statistical analysis

Continuous variables were expressed as mean ± standard deviation, and comparisons of categorical variables between the two groups were performed using the chi-square test. Survival curves for overall survival (OS) and disease-free survival (DFS) were generated by the Kaplan–Meier method and compared by the log-rank test. OS was defined as the time interval from the date of surgical treatment to the date of death or the date of the last follow-up. DFS was defined as the time interval from the date of surgical intervention to the date of tumor recurrence, metastasis, death, or the date of the last follow-up. Multivariate survival analyses were performed using Cox proportional hazards regression models to identify independent prognostic factors for OS. Univariate logistic regression analysis was used to identify potential risk factors for SSI, and variables with a P-value of less than 0.05 in the univariate analysis were subsequently included in a multivariate logistic regression model to identify independent predictors of SSI. For risk stratification, chi-square tests were used to assess the association between the number of risk factors identified in the multivariate analysis (0–1, 2, or 3) and the risk of SSI. In addition, relative risks (RR) and corresponding 95% confidence intervals (CI) were calculated using the rest of the population as the reference group (RR = SSI risk in patients with N risk factors / SSI risk in the rest of the population). Statistical significance was defined as P < 0.05. All statistical analyses were performed using R software (version 4.3.1).
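As an illustration of these analyses, the following is a minimal R sketch. It assumes a one-row-per-patient data frame dat with a 0/1 indicator ssi, follow-up variables os_months and death, the candidate covariates, and a derived count n_risk_factors; all variable names are placeholders, not the authors' exact code.

library(survival)

# Kaplan-Meier curves and log-rank test for overall survival by SSI status
km <- survfit(Surv(os_months, death) ~ ssi, data = dat)
survdiff(Surv(os_months, death) ~ ssi, data = dat)

# Multivariate Cox model for independent prognostic factors of OS
coxph(Surv(os_months, death) ~ ssi + age_group + bmi_group + pT + nCRT +
        pulmonary_infection, data = dat)

# Univariate, then multivariate, logistic regression for SSI risk factors
summary(glm(ssi ~ bmi_group, family = binomial, data = dat))   # one univariate model
summary(glm(ssi ~ bmi_group + lca_preserved + pni + nCRT,      # multivariate model
            family = binomial, data = dat))

# Relative risk of SSI for patients carrying n of the identified risk
# factors, with the rest of the cohort as the reference group
rr_for <- function(n) {
  mean(dat$ssi[dat$n_risk_factors == n]) /
    mean(dat$ssi[dat$n_risk_factors != n])
}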
We retrospectively analyzed data from 360 patients with pathological rectal cancer (RC) who underwent robotic-assisted radical rectal cancer surgery at Fujian Medical University Union Hospital and Longyan First Affiliated Hospital of Fujian Medical University between 2017 and 2024. In this study, 295 patients received treatment at the Union Hospital of Fujian Medical University, while 68 patients were treated at the Longyan First Hospital of Fujian Medical University (Fig. ). The patient characteristics, pathological and surgical manifestations, and postoperative histological findings documented in our medical records and database were uniformly consistent with adenocarcinoma. This study followed the recommended items in the STROBE statement and was designed and reported according to the standards for reporting case–control studies. The main objective of this study was to evaluate short-term postoperative complications, so only all complications occurring within 30 days after surgery were counted, including surgical site infection anastomotic leaks, pulmonary infections, and small bowel obstruction. Late anastomotic leakage, anastomotic stenosis, or incisional hernia was not observed in this study. Surgical site infection (SSI) is categorized into two types: wound SSI, which can be either superficial or deep, and organ/space SSI. Wound SSIs refer to infections occurring at the incision site. Organ/space SSI, on the other hand, involves intra-abdominal or pelvic abscesses, which are accumulations of pus within the abdomen or pelvis. The diagnosis of such abscesses is generally established through advanced imaging modalities, including ultrasonography and computed tomography (CT). The presence of clinical anastomotic leakage may be concomitant with these infections, although it is not a requisite for diagnosis . According to SSI, the patients were divided into the SSI group and the non-SSI group. The following information was used for the analysis of this study: baseline information, tumor location, pathological information, surgical details, postoperative hospitalization, and follow-up information. The World Health Organization (WHO) BMI classification for Asian populations was used in this study. Low body weight (BMI < 18.5 kg/m 2 ) is defined as a body mass index (BMI) below the normal range, reflecting possible malnutrition or underweight. Normal weight (BMI 18.5–24.9 kg/m 2 ) is considered a healthy weight range. Overweight/obesity (BMI ≥ 25.0 kg/m 2 ) includes overweight and obesity. Postoperative anastomotic leakage , an abnormal connection at the anastomotic site after surgery, results in the escape of intestinal contents or inflammatory fluids into the abdominal cavity or out through a drain. Patients present with symptoms such as fever, abdominal pain, bloating, or abnormal bowel function. Turbid fluid or fluid-containing feces is discharged from the wound or drain. Elevated peripheral blood leukocytes or significant elevation of C-reactive protein, CT examination is the gold standard for the diagnosis of anastomotic leakage, showing fluid or gas collection around the anastomosis and enhanced images of the suspected leakage site.
Inclusion criteria: (1) patients with a confirmed diagnosis of rectal cancer based on postoperative pathological examination, treated between January 2017 and December 2024 at the two study centers; (2) all surgical procedures were conducted by a consistent and specialized surgical team; (3) patients with comprehensive clinicopathological data and complete follow-up records; (4) patients aged 18 years or older. Exclusion criteria: (1) patients identified with distant metastasis of the tumor during preoperative assessment or intraoperative exploration, precluding radical surgical intervention; (2) patients diagnosed with two or more concurrent malignant tumors during either the preoperative or postoperative follow-up period; (3) patients who underwent emergency surgical procedures due to complications such as tumor perforation, obstruction, or bleeding; (4) patients with a diagnosis of familial adenomatous polyposis.
This study was conducted across two centers, involving a comprehensive preoperative evaluation for all patients, which included imaging studies, laboratory tests, and detailed medical history assessments, to determine their suitability for undergoing robotic-assisted rectal cancer surgery. All robotic-assisted rectal cancer surgeries were performed by experienced surgical teams from Fujian Medical University Union Hospital and Longyan First Affiliated Hospital of Fujian Medical University. Each attending surgeon on the surgical team had received specialized training in robotic surgery and possessed extensive experience in performing robotic-assisted procedures. The lead surgeon operated the Da Vinci Surgical System throughout the surgeries, ensuring precise and effective surgical interventions. The main indications for preoperative neoadjuvant therapy for patients with locally advanced rectal cancer (LARC) that is resectable include patients with clinical stage cT3-4 or with regional lymph node metastasis and no distant metastasis. Especially for patients with low rectal cancer, if the tumor location is close to the anal sphincter and the patient has a strong willingness to preserve the anus, neoadjuvant therapy can be chosen after full communication with the patient, and the surgical plan can be decided according to the evaluation of the efficacy after surgery . Patients of advanced age or with comorbid metabolic disease should have the LCA preserved whenever possible, as the colonic border arterial arch in these patients may have a restricted blood supply due to atherosclerosis or microangiopathy, and preservation of the LCA improves anastomotic perfusion proximally, thereby reducing the risk of anastomotic leakage. In patients with rectal cancer after neoadjuvant therapy, the decision to preserve the LCA is based on the patient’s underlying condition. In patients with familial adenomatous polyposis or Lynch syndrome, preservation of the LCA reserves vascularity for possible future reoperation . Patients with descending colonic rotation have an anatomically abnormal vascular arch, and not preserving the LCA may result in extensive intestinal ischemia. Failure to preserve LCA and positive IMA root lymph nodes are independent risk factors for distant recurrence after rectal cancer surgery . To ensure complete lymph node dissection, the LCA is preserved in such patients. When mesenteric tension is too high and free bowel is insufficient to complete a low-tension anastomosis, we consider dissecting the LCA to reduce anastomotic tension. Preoperative bowel preparation is performed in strict accordance with international guidelines and our center’s standardized procedures. Implementation details of bowel preparation: mechanical bowel preparation : the day before surgery, patients take a polyethylene glycol electrolyte solution for bowel cleansing, with the dose adjusted to the patient’s weight. Bowel cleansing is usually recommended to be completed within 12–24 h prior to surgery to ensure that intraoperative bowel contents are minimized. The principles of antibiotic administration in this study were as follows: the first dose of broad-spectrum antibiotics was given intravenously within 30 to 60 min after the induction of anesthesia to ensure that blood levels reached effective levels at the start of surgery . 
In dual-center patients requiring surgery, the following should be implemented: if the duration of surgery is more than 3 h or the intraoperative bleeding is more than 1500 ml, the antibiotic dose should be increased according to the duration of surgery and the amount of bleeding. Postoperative antibiotics should not be given for more than 24 h. In patients with severe intraoperative contamination or complex surgery, this may be extended to 48 h. The choice of antibiotics is based on the coverage of the target strain and the individual patient. The indications for stoma creation have been formulated according to international guidelines and clinical practice , taking into account the specific situation of the patient, and mainly include patients with high risk of anastomotic leakage, such as patients with very low tumor location, patients who received neoadjuvant therapy prior to the operation, patients with high intraoperative anastomotic tension, or patients with poor blood supply. Patients are with poor systemic conditions, such as advanced age, malnutrition, or severe comorbidities. Patients are with poor intraoperative bowel conditions, high risk of contamination of bowel contents, or inadequate intraoperative bowel cleansing. The placement of an abdominal drain is determined on a case-by-case basis, mainly in patients with high-risk anastomotic leaks, cases with significant intraoperative contamination, or heavy bleeding. Patients require real-time monitoring of the nature and amount of intra-abdominal fluid. Anal drains are mainly used in low anastomotic surgery, especially in anal preservation surgery, to reduce localized anastomotic fluid collection and to reduce pressure. Abdominal drains are usually removed within 48–72 h postoperatively with clear drainage or significantly reduced volume and no signs of infection. Anal drains are usually removed within 2–3 days after surgery, but the exact time depends on the patient’s recovery. The standard protocol for regular postoperative follow-up for patients in this study is as follows: patients were reviewed every 3–6 months during the first 2 years post-surgery, every 6 months during the 3rd and 4th years, and subsequently every 6–12 months from the 5th year onwards. Laboratory tests conducted during follow-up include routine blood tests, routine biochemical tests, and serum tumor markers, among others. Imaging examinations performed during follow-up consist of ultrasound, computed tomography (CT), positron emission tomography (PET) (specifically for the detection of recurrent lesions), and gastroenteroscopy. The survival follow-up methodology adhered to the guidelines of the American Joint Committee on Cancer (AJCC) . Patients were followed up through multiple methods, including outpatient visits, the electronic medical record system, and telephone callbacks, to ensure the completeness and accuracy of the data.
Continuous variables were expressed as mean ± standard deviation, and comparisons of categorical variables between the two groups were performed using the chi-square test. Survival curves for overall survival (OS) and disease-free survival (DFS) were generated by the Kaplan–Meier method and compared by the log-rank test. OS was defined as the time interval from the date of surgical treatment to the date of death or the date of the last follow-up. DFS was defined as the time interval from the date of surgical intervention to the date of tumor recurrence, metastasis, death, or the date of the last follow-up. Multivariate survival analyses were performed using Cox proportional hazards regression models to identify independent prognostic factors for OS. Univariate logistic regression analysis was used to identify potential risk factors for SSI. Variables with a P -value of less than 0.05 in the univariate analysis were subsequently included in a multivariate logistic regression model to identify independent predictors of SSI. For risk stratification, chi-square tests were used to assess the association between the number of risk factors identified in the multivariate analyses (0–1, 2, or 3) and the risk of SSI. In addition, relative risk (RR) and corresponding 95% confidence intervals (CI) were calculated using the rest of the population as the reference group (RR = SSI risk in patients with N risk factors / SSI risk in the rest of the population). Statistical significance was defined as P < 0.05. All statistical analyses were performed using R software (version 4.3.1).
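For readers who wish to reproduce this style of analysis, the sketch below illustrates the same pipeline (Kaplan–Meier curves with a log-rank comparison, a multivariate Cox model for OS, and a multivariate logistic model for SSI) in Python with lifelines and statsmodels rather than the R 4.3.1 environment actually used; all data, column names, and covariates are hypothetical placeholders, not the study's dataset.

```python
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 360
df = pd.DataFrame({
    "os_months": np.round(rng.exponential(40, n), 1),  # surgery to death/last follow-up
    "death": rng.integers(0, 2, n),                    # 1 = death observed, 0 = censored
    "ssi": rng.integers(0, 2, n),                      # 1 = surgical site infection
    "pni": rng.integers(0, 2, n),                      # perineural invasion
    "lca": rng.integers(0, 2, n),                      # left colic artery preserved
})

# Kaplan-Meier curves for OS in each SSI group, compared with the log-rank test
km = KaplanMeierFitter()
for grp, sub in df.groupby("ssi"):
    km.fit(sub["os_months"], sub["death"], label=f"SSI={grp}")
with_ssi, without_ssi = df[df["ssi"] == 1], df[df["ssi"] == 0]
lr = logrank_test(with_ssi["os_months"], without_ssi["os_months"],
                  event_observed_A=with_ssi["death"],
                  event_observed_B=without_ssi["death"])
print("log-rank p =", lr.p_value)

# Multivariate Cox proportional hazards model for independent OS predictors
cph = CoxPHFitter()
cph.fit(df, duration_col="os_months", event_col="death")
print(cph.summary)  # hazard ratios appear in the exp(coef) column

# Multivariate logistic regression for independent predictors of SSI
X = sm.add_constant(df[["pni", "lca"]])
fit = sm.Logit(df["ssi"], X).fit(disp=0)
print(np.exp(fit.params))  # odds ratios
```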
Baseline clinicopathological characteristics
In this study, patients were stratified into two cohorts according to the presence or absence of surgical site infections (SSIs): the non-SSI group ( n = 316) and the SSI group ( n = 44). Within the SSI group, the infections were further categorized as incisional infections ( n = 5), abdominal infections ( n = 32), and combined incisional and abdominal infections ( n = 7). Notable differences were observed between the groups in several clinicopathological characteristics. The SSI group exhibited a significantly higher proportion of underweight patients (25.00% vs. 9.49%, P = 0.009). Furthermore, preservation of the left colic artery was significantly more prevalent in the non-SSI group ( P < 0.001). The incidence of neoadjuvant chemoradiotherapy (nCRT) was also significantly higher in the non-SSI group (58.23% vs. 40.91%, P = 0.03). A significant variation in surgical approach was observed between the groups ( P = 0.026), with a larger proportion of patients in the SSI group undergoing low anterior resection (63.64%) compared with other surgical techniques. Analysis of the pathological T-stage (pT) revealed a significant difference between the two groups ( P = 0.025). Although no significant difference was found in lymph node status (pN) between the groups, the SSI group had a slightly higher proportion of patients classified as pN1 and pN2. Perineural invasion (PNI) was significantly more prevalent in the SSI group (36.36% vs. 12.03%, P < 0.001). Postoperative pulmonary infections were markedly more frequent in the SSI group than in the non-SSI group (22.73% vs. 3.48%, P < 0.001). While the incidence of small bowel obstruction was higher in the SSI group, the difference did not reach statistical significance (4.55% vs. 0.95%, P = 0.056). Furthermore, the incidence of SSI after low anterior resection (LAR) was 63.64%, whereas after LAR with stoma it was 19.17%. Five of 31 anastomotic leaks were diagnosed as SSI (11.36% of the SSI group). No significant differences were found between the groups regarding the incidence of anastomotic leakage or anastomotic bleeding. Additionally, there were no significant differences between the groups in terms of age, gender, hypertension, diabetes, or the distance between the tumor and the anal verge (Table ).
Survival analysis
The relationship between clinicopathological factors and overall survival (OS) at 1, 3, and 5 years following robot-assisted radical rectal cancer surgery was analyzed. Age, BMI, hypertension, pathological T-stage, neoadjuvant chemoradiotherapy, pulmonary infection, and surgical site infection (SSI) were significantly associated with overall survival according to the log-rank test ( P < 0.05). The median follow-up times were 32 months for the non-SSI group and 22 months for the SSI group. The 3-year overall survival rate was significantly lower in the SSI group than in the non-SSI group (74.5% vs. 93.1%; P < 0.001). However, there were no significant differences in disease-free survival (DFS) between the two groups (Fig. A). Overall survival was significantly lower in patients who experienced abdominal infections compared with those without such infections, while the impact on disease-free survival did not reach statistical significance (Fig. C). In contrast, incisional infections were not associated with a significant difference in either overall survival or disease-free survival (Fig. E).
Univariate and multivariate analyses for surgical site infection
In this study, both univariate and multivariate logistic regression analyses were conducted to identify factors associated with the development of SSI following robot-assisted radical rectal cancer surgery (Table ). The univariate analysis revealed several factors significantly correlated with an increased risk of SSI. Underweight patients exhibited a higher likelihood of developing SSI (OR 2.99, 95% CI 1.34–6.67, P = 0.007). Additionally, patients with advanced pathological T stages, particularly those with T4 tumors, demonstrated a markedly elevated risk of SSI (OR 3.40, 95% CI 1.03–11.24, P = 0.045). The presence of positive perineural invasion (PNI) was strongly associated with the occurrence of SSI (OR 4.18, 95% CI 2.07–8.43, P < 0.001). Conversely, certain factors were associated with a reduced likelihood of SSI. Patients who underwent low anterior resection had a significantly lower risk of SSI compared with those who underwent abdominoperineal resection (OR 0.21, 95% CI 0.06–0.72, P = 0.012). Similarly, preservation of the left colic artery (LCA) (OR 0.20, 95% CI 0.09–0.44, P < 0.001) and the administration of neoadjuvant therapy (OR 0.50, 95% CI 0.26–0.94, P = 0.032) were both associated with a decreased risk of SSI. In the multivariate logistic regression analysis, positive perineural invasion (OR 3.45, 95% CI 1.48–8.05, P = 0.004) emerged as an independent risk factor for an increased incidence of SSI. Conversely, low anterior resection (OR 0.21, 95% CI 0.06–0.72, P = 0.012), preservation of the LCA (OR 0.33, 95% CI 0.13–0.85, P = 0.021), and the administration of neoadjuvant therapy (OR 0.38, 95% CI 0.18–0.82, P = 0.014) remained significantly associated with a reduced risk of SSI.
Univariate and multivariate analyses for overall survival
In this study, univariate and multivariate Cox regression analyses were employed to identify factors influencing overall survival following robot-assisted radical rectal cancer surgery (Table ). The univariate Cox regression analysis revealed several factors significantly associated with diminished overall survival. Patients aged 50 years or older demonstrated a higher hazard ratio (HR) for reduced overall survival (HR 7.57, 95% CI 1.02–56.1, P = 0.048). The presence of hypertension was also significantly correlated with poorer overall survival outcomes (HR 3.63, 95% CI 1.63–8.11, P = 0.002). Although neoadjuvant chemoradiotherapy exhibited a protective trend, it did not reach statistical significance (HR 0.45, 95% CI 0.19–1.02, P = 0.055). Lung metastasis was associated with a significantly heightened risk of adverse outcomes (HR 4.72, 95% CI 1.76–12.65, P = 0.002). Patients with bone metastasis exhibited a markedly elevated risk (HR 11.80, 95% CI 3.52–39.59, P < 0.001) in comparison with those without bone metastasis. Additionally, the occurrence of SSI was significantly associated with decreased overall survival (HR 3.57, 95% CI 1.39–9.14, P = 0.008). In the multivariate Cox regression analysis, hypertension (HR 2.63, 95% CI 1.14–6.05, P = 0.023) and bone metastasis (HR 9.89, 95% CI 1.73–56.49, P = 0.010) remained independent predictors of reduced overall survival. Furthermore, the presence of SSI continued to be independently associated with a significant reduction in overall survival (HR 3.43, 95% CI 1.30–9.04, P = 0.012).
Risk stratification for SSI
According to the multivariate logistic analysis, three significant correlates (perineural invasion, preservation of the left colic artery, and neoadjuvant chemoradiotherapy) could be used to stratify the risk of SSI. The number of risk factors was significantly associated with an increased risk of SSI ( P < 0.001). Compared with the overall population, patients with 1 or fewer risk factors had a lower risk, whereas patients with 2 or more risk factors had a significantly higher risk. The risk of SSI was 10.06% for patients with 0–1 risk factor (RR = 0.823, 95% CI 0.155–0.407) and 33.33% for patients with 2 risk factors (RR = 2.727, 95% CI 0.155–0.407). Table shows the relationship between the number of risk factors and the risk of SSI.
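As a concrete illustration of the RR definition used here (SSI risk in patients with N risk factors divided by SSI risk in the rest of the population) with a Wald-type 95% CI on the log scale, consider the minimal sketch below; the counts are hypothetical and do not reproduce the study's table.

```python
import math

def rr_with_ci(events_grp, n_grp, events_ref, n_ref, z=1.96):
    """Relative risk of the index group vs. the reference group, with a
    95% CI computed on the log scale (Wald method)."""
    rr = (events_grp / n_grp) / (events_ref / n_ref)
    se = math.sqrt(1 / events_grp - 1 / n_grp + 1 / events_ref - 1 / n_ref)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical counts: 20 SSIs among 60 patients with >= 2 risk factors,
# versus 24 SSIs among the 300 remaining patients
print(rr_with_ci(20, 60, 24, 300))
```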
This study examined the risk factors associated with the development of surgical site infection (SSI) after robot-assisted radical rectal cancer surgery and investigated how these factors affect patients’ survival prognosis. The findings suggest that the occurrence of SSI has a considerable impact on both postoperative recovery and long-term survival, and that a number of clinicopathological factors are significantly correlated with the incidence of SSI and with overall survival. The risks associated with colonic and rectal surgeries vary considerably, particularly because of the unique technical demands of rectal procedures. Rectal surgeries frequently require the formation of an ostomy, preoperative chemoradiotherapy, and total mesorectal excision (TME), often with a low anastomosis near the anal verge. These factors tend to prolong operative time and increase the potential for bacterial contamination . Studies by Guillou and Biondo report an SSI rate of 9%, with the incidence of intra-abdominal or pelvic sepsis reaching 10%. Furthermore, the lower incidence of SSI in low anterior resection (LAR) compared with abdominoperineal resection (APR) is consistent with previous literature . This variation may be attributed to differences in surgical technique, anatomical positioning, and the degree of tissue trauma involved in each procedure . The introduction of robot-assisted technology in LAR has markedly improved surgical precision. Given the deep location of the lower rectum within the pelvis and the complexity of its surrounding anatomy, the three-dimensional high-definition visualization and enhanced dexterity of robotic instruments enable surgeons to navigate confined spaces with greater accuracy. This is particularly advantageous when operating around delicate structures such as nerves and blood vessels. Park et al. highlight that the increased precision afforded by robotic assistance minimizes tissue trauma and subsequently reduces the incidence of postoperative complications . In contrast, APR necessitates a more extensive resection, involving both abdominal and perineal incisions and often resulting in a permanent ostomy. The broader surgical exposure and greater complexity inherent in APR contribute to a higher risk of infection. Factors such as surgical access, emergency surgery, duration of surgery, massive intraoperative blood loss, malnutrition, and diabetes mellitus have been significantly associated with the occurrence of SSI in both open and laparoscopic surgery . The lower incidence of SSI in laparoscopic compared with open surgery may be related to the fact that laparoscopic surgery is less invasive, uses smaller incisions, and allows faster postoperative recovery . In open surgery, incision length and the degree of intraoperative contamination have a more pronounced effect on SSI. This study provides a new perspective on incisional infections. Perineural invasion (PNI) is widely regarded as a key risk factor in tumor pathology, as it promotes direct interaction of tumor cells with nerve cells. This interaction promotes deep invasion of malignant cells into surrounding tissues, disrupting the structural integrity of the tissue and weakening immune defenses, thereby increasing the risk of infection . Tumor invasion often penetrates deep into peripheral neural structures, necessitating a wider surgical resection involving more tissue layers, which in turn prolongs operative time.
In PNI-positive patients, tumor invasion into surrounding tissues and damage to local blood vessels and nerves further exacerbate intraoperative blood loss and local tissue hypoxia, which increases the risk of infection . Prolonged intraoperative manipulation and massive blood loss can impair immune function, especially by weakening the local immune response, which compromises wound healing and thus increases the risk of postoperative surgical site infection. The combination of immune suppression, surgical trauma, and intraoperative blood loss may make it easier for bacteria to colonize the surgical site and cause infection . Therefore, PNI-positive patients with locally advanced cancer may be more susceptible to SSIs because of the longer duration of surgery, higher intraoperative blood loss, and impaired local immune function, suggesting that clinicians should provide more detailed postoperative management for these patients, strengthen infection prevention and control measures, and closely monitor the postoperative recovery process. The findings of this study indicate that preservation of the left colic artery, along with neoadjuvant therapy, may serve as a protective factor reducing the incidence of SSI. The left colic artery is the primary source of blood supply to the descending colon. Preserving this artery during surgery ensures adequate perfusion of the distal colon and the anastomotic region, which facilitates tissue healing and reduces the risk of anastomotic leakage due to ischemia. This observation is consistent with our previous study . Most studies have identified neoadjuvant therapy as a risk factor for SSI . Bailey’s research highlights the critical role of angiogenesis in the growth, progression, and metastasis of various solid tumors. Since angiogenesis is also essential for wound healing, pharmacological agents that target the angiogenic pathway may inadvertently disrupt this process, potentially increasing the risk of postoperative complications such as wound dehiscence, surgical site bleeding, and infection . However, other studies have indicated that neoadjuvant therapy does not significantly influence the incidence of SSI . Selection into the preoperative treatment group may favor patients who are in better physical condition, have fewer comorbidities, or can tolerate radiotherapy, and the underlying health status of these patients may help reduce the risk of postoperative infection. Patients receiving neoadjuvant therapy usually undergo stricter perioperative management, including nutritional support, optimization of immune function, and more frequent postoperative monitoring, which may indirectly reduce the incidence of infection. The results of the present study may therefore reflect an advantage under specific clinical practice conditions and deserve further validation and exploration. This study has several limitations. First, although the study employed a dual-center design, the small sample size may have constrained the statistical power and generalizability of the findings. This limitation may have affected the ability to comprehensively assess certain variables and their associations with SSI and prognosis. Second, the absence of some key surgery-related variables may have restricted a thorough analysis of the mechanisms underlying SSI occurrence and its associated risk factors.
Third, we did not include a detailed nutritional assessment tool, so our exploration of the relationship between nutrition and SSI was incomplete; this variable should be a focus of future studies. Additionally, the relatively short follow-up period may be insufficient to fully evaluate the long-term impact of SSI on patient outcomes. Therefore, these findings should be interpreted with caution and require further validation in studies that incorporate a broader range of surgery-related variables and extended follow-up durations. Finally, given the geographic and institutional constraints of the sample, the external validity of the results must be confirmed through large-scale, multicenter studies to ensure applicability across diverse populations.
In conclusion, this study suggests that surgical site infection may be a key factor in the prognosis of patients with rectal cancer. The occurrence of SSI was significantly associated with poorer overall survival, underscoring its important role in postoperative recovery and long-term survival. This result highlights the need for prevention and management of SSI during the treatment of rectal cancer to improve patients’ long-term prognosis.
Below is the link to the electronic supplementary material. Supplementary file 1: Clinicopathological characteristics and overall survival at 1, 3, and 5 years after robot-assisted radical rectal cancer surgery. (DOCX 36 KB)
|
A pandemic response to home delivery for ambulatory ECG monitoring: Development and validation | e9ee6aca-7654-4dd7-a49c-0210e788a9c2 | 8414195 | Patient Education as Topic[mh] | Long-term ambulatory ECG monitoring devices are used for the acquisition of symptom–rhythm correlation in the assessment of rhythm-related symptoms and for detection of suspected clinically relevant arrhythmias, especially atrial fibrillation or, as a surrogate, atrial ectopy burden . There are a variety of manufacturing systems available for such data acquisition. More recently, patch electrode systems have become available. These are available in single- or multiple-lead data acquisition configurations and allow for the reliable acquisition of continuous ECG data for up to a two-week period. Such devices are small and leadless and, depending on the manufacturer's configuration, can be mailed back to the manufacturer for refurbishment after the downloaded data are uploaded to a cloud server or other interface . In routine use of these devices, patients receive the device in a clinic, are educated on its use, are instructed how to activate the device for symptom correlation, and are also instructed how to take off the device and reapply it as needed. At the time of clinic application, the skin is prepared to enhance electrode contact and the reason for obtaining the data is reviewed. In the aggregate, the current standard application of ambulatory ECG recording devices requires expert clinical supervision, patient education, and delivery of the device in an office or clinic. The global COVID-19 pandemic has forced healthcare providers to pivot their care to telemedicine and videoconferencing capabilities . We wished to assess whether a similar approach could be provided for home long-term ECG telemetry data acquisition. A protocol was developed, instructional materials were created, and models for internal validation were designed to answer this question. We hypothesized that the simplicity of a mail-out patch electrode long-term (14-day) ECG recording configuration should allow for reliable, entirely home-based delivery of education, service, and data acquisition.
The patient population studied was derived from a specialty neurology clinic focused on stroke management. The descriptions of both patient populations were limited to age, gender, and baseline rhythms. We developed educational materials, including a pamphlet and a 5-min video, to educate patients on how to apply and use the device. The ordering physician met with the patient by videoconference to explain the need for long-term ECG recording data and then notified the manufacturer (Icentia, Quebec, QC, Canada) to deliver the package to the patient. Upon notification, a package was sent using regular priority postage with tracking capabilities. The package included: a single-lead adhesive long-term ECG recording device (CardioSTAT, Icentia, Quebec, QC, Canada), 2 sets of electrodes, the surrounding adhesive collar, a patient diary form, a pre-addressed return envelope, and written instructions with diagrams as well as a contact phone number to call the manufacturer for any technical support (see ). All devices were placed in a vertical orientation along the sternum (see ). The instruction guide also included a URL reference to a 5-min video for support as needed (available at www.cardiostat.com/support (last accessed July 3, 2020)).
Once an ECG recording system package was mailed to the patient, database entry for tracking of the package began. By protocol, all patients received a single phone call upon shipping to provide information regarding the device, including the company's contact information. The phone call was used to remind the patient of the device's role in management, review its use, and encourage prompt application. At that time, patients confirmed and consented to wearing the device upon arrival. All patients were invited to phone the manufacturer if any challenges arose, and the nature of any such phone call was abstracted. If the recorder was not returned to the company by a specific date, the patient received their only other phone call to ensure they mailed the device back. For comparison, a retrospective approach was used to identify a control group made up of a sequential series of patients, seen just before the pandemic response, who received the same long-term ECG recording device in the standard, conventional fashion with in-clinic education, skin preparation, and device application. Both groups were otherwise identical with respect to a physician-prescribed device and clinical location in an academic stroke clinic, with the main goal being identification of atrial fibrillation in the context of concern for a cardioembolic cause of stroke. All of the Holter data were reviewed and reported by the same physician. The primary outcome was noise magnitude, and the secondary outcomes were atrial premature beat (APB) burden and hours recorded. In order to express the APB burden among patients with variations in the quantity of noise signal, the amount of ectopy was normalized to the hours of data available for analysis and represented as the absolute APB count per total recorded hours. The chief endpoints for purposes of comparison and validation were:
1. The magnitude of noise on the recorded signal from the mailed devices compared with the in-clinic delivered devices
2. Arrhythmia-related indices of atrial ectopy burden: total atrial ectopy count, measured and expressed as APB count per recorded hour of data acquired
3. The frequency of manual activations for symptoms
4. Hours of recorded data available
All variables were expressed as means with standard deviations (SD). Standard descriptive statistics were computed, and the two populations were compared using the Mann–Whitney U test.
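A minimal sketch of this comparison, assuming hypothetical per-patient APB counts and recorded hours rather than the study's data, is shown below; it normalizes ectopy to APB per recorded hour as described above and compares the groups with SciPy's Mann–Whitney U test.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical per-patient totals (not the study's data)
apb_mail = np.array([120, 4300, 15, 600, 48])          # total APBs, mailed group
hrs_mail = np.array([238.0, 250.5, 221.0, 244.0, 260.0])
apb_clin = np.array([90, 1500, 40, 300, 75])           # total APBs, clinic group
hrs_clin = np.array([246.0, 241.5, 249.0, 240.0, 251.0])

# Normalize ectopy to hours of analyzable data: APB per recorded hour
rate_mail = apb_mail / hrs_mail
rate_clin = apb_clin / hrs_clin

u, p = mannwhitneyu(rate_mail, rate_clin, alternative="two-sided")
print(f"U = {u:.0f}, p = {p:.3f}")
```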
The 47 patients who received the device in the mail were compared with 47 patients in the control group. The 47 recipients with mail delivery had an average age of 70 ± 14.7 years, and 49% were male. The 47 patients in the comparison control group had an average age of 65 ± 15 years, and 55% were male. All patients in both populations were in sinus rhythm. The two groups were not statistically different from each other. The 47 devices were sent out from March 27th to May 11th, 2020. All devices shipped were returned, and reports were delivered for all of them. Of those returned, 25 patients (53%) installed the devices using the provided instructional materials without additional assistance over the phone, and 21 patients (45%) required help over the phone to install them. For one patient (2.1%), it is unknown whether additional help was required. One device (2.1%) was wrongly addressed, and one device (2.1%) was sent to a patient who was unaware of the test; however, both reports and devices were returned. The magnitude of noise on mailed devices averaged 22 ± 21% compared with 26 ± 14% in the control group, U = 848, p = 0.052. Patients with mailed devices had an APB burden of 37.05 ± 95.5 APB per recorded hour. Control patients were similar, with an APB burden of 23.3 ± 42.8 APB per recorded hour, U = 669, p = 0.465. The mean number of hours recorded for patients with mailed devices was 240.37 ± 78.3 h; for control patients it was 245.05 ± 46.7 h, U = 1032, p = 0.589. Manual activations for symptoms occurred an average of 10 and 8 times, respectively.
The main finding of this study is that a simple and effective protocol can be quickly developed to deliver home ECG recording technology in a reliable fashion with very limited involvement of healthcare professionals. Using only home-printed materials, one phone call, and as-needed access to a simple video, patients can receive, apply, and record valid data from long-term ambulatory ECG recorders. Using data points such as magnitude of noise, APB burden per recorded hour, and frequency of manual activation, this study shows no statistically significant difference between home-based and clinic-based application. The quality of the data is commensurate with that from patients who received their device in the clinic in the conventional format. Our goal was to assess the least intensive intervention possible that would still obtain adequate-quality data. This occurred with a high patient compliance rate and limited loss to follow-up. With only one phone call, a printed instruction manual, and a pre-supplied return label, a large percentage of patients complied with the instructions and mailed back the device. We are not aware of any prior report establishing the validity of home-delivered, self-taught acquisition of ECG data in a clinical context. Our study underscores the validity of this approach, with even a minimal amount of contact required to reliably obtain such data. In another study, a different home-delivered 7-day ambulatory ECG recording device was used in a trial to validate a comparison signal obtained from a continuous, event-activated patient wearable device, and it also found high compliance for mailed-out ECG recording devices. That study did not detail the intensity of the home ECG instruction and intervention; however, the research question by design required more intensive efforts to ensure that acquisition of the Holter data was comprehensive and performed correctly . Another trial using a mail-out home ECG recording device included a voluntary, more intensive web-based instruction module and showed good compliance with home monitoring. Unlike our study, there was no exclusive comparison between clinic- and home-based application .
The patient population represents a selected group from a single stroke clinic, which can give rise to possible selection and referral bias. In particular, the population studied carries psychological and clinical fears regarding stroke causation and prevention, which may have influenced the compliance and high response rate achieved with minimal intervention. Whether the data can be generalized to a less clinically charged context is not known. Our comparison of the two groups is limited to age, gender, and rhythm status; no other factors were compared between the two populations. There is no reason to suspect systematic differences in the clinical attributes of the two groups studied, which differ only by date of referral for monitoring. Although manual activations were performed and were similar in both groups, detailed assessment of the clinical context or even the intent behind such activations is only partly available with limited patient diary methods.
The global pandemic has forced the rapid development of telemedicine protocols. In this study, the development and delivery of a simple long-term home ECG monitoring protocol was effective and reliable for continuous home ambulatory ECG data acquisition. The quality of the data matches that of clinically applied ECG monitoring devices. Using simple instructions, a single standardized phone prompt, and as-needed contact, patients were able to use the device properly, acquiring symptom–rhythm correlation and ECG recordings of equivalent quality between home delivery and clinic application.
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
|
Bringing theory to life: integrating case-based learning in applied physiology for undergraduate physiotherapy education | 4464b632-5102-4226-877f-cf1ff44b0d9e | 11827333 | Physiology[mh] | The fundamental medical sciences, including applied physiology, constitute a foundational element of medical curricula, providing essential understanding of the human body's biological underpinnings, various diseases, and associated therapies . Physiotherapy students rely on this basic knowledge as they develop their clinical expertise. However, there is growing concern that traditional teaching methods (TTMs) in physiotherapy education fail to yield optimal learning outcomes . Integrating basic sciences with clinical relevance from the outset of education is believed to enhance information retention and facilitate its application in clinical settings . In the traditional framework of physiotherapy education, basic sciences, including medical physiology, are typically taught in the initial two years of undergraduate studies with limited interdisciplinary interaction. This approach may detrimentally impact students' perception and long-term retention of foundational scientific knowledge . The inability to connect foundational knowledge with clinical contexts may result in graduates lacking the critical thinking and problem-solving skills essential for effective clinical practice . Moreover, senior undergraduate physiotherapy students frequently express informal dissatisfaction with their memory of basic medical sciences and struggle to correlate this content with later clinical curricula . As physiotherapy students progress through their education, their perceptions of foundational courses often become increasingly negative, highlighting a potential flaw in the educational system where acquired knowledge risks becoming inaccessible and inert . Research indicates that basic science knowledge acquired within a clinical context is more readily applied and comprehended by students . Despite significant efforts over decades, the practical implementation of integration remains challenging. Case-based learning (CBL) emerges as a promising alternative, characterized by interactivity and student autonomy, potentially fostering greater enthusiasm for learning . Recent studies have explored the potential benefits of combining CBL with TTMs in physiotherapy education, contrasting with traditional didactic lectures and practical classes that are often teacher-centered with minimal student engagement . Monitoring student perception throughout undergraduate courses may inform recommendations for better integration of basic sciences within clinical subjects, facilitating the unified application of foundational knowledge to clinical scenarios. Integrating a CBL approach alongside TTMs in applied physiology may enhance physiotherapy students' ability to apply theoretical knowledge to clinical scenarios, thereby improving their academic performance and perception. Hence, the current study aims to integrate CBL with TTMs in teaching applied physiology for undergraduate physiotherapy students and to evaluate the impact of this combined hybrid approach on student perceptions and academic performance, comparing it to the application of TTMs alone.
Study design
This is an interventional study that was conducted at the Faculty of Physiotherapy, AlSalam University, during the period of January to May 2023, on undergraduate physiotherapy students during the neuroscience course.
Ethical considerations
This study was conducted in accordance with the Declaration of Helsinki.
Setting and participants
Study participants and eligibility: A cohort of 244 undergraduate physiotherapy students in their fourth semester, enrolled in a neuroscience course, was recruited for the present study. There were no exclusion criteria.
Sample size
The study included all fourth-semester students ( n = 244), with an expected dropout rate of 5%. A formal sample size calculation was not performed, as the objective was to include all eligible students.
The facilitators
Nine volunteer physiologists with expertise in applied physiology served as facilitators for the CBL sessions. Most had prior experience with interactive teaching methodologies, including problem-based learning (PBL) and small-group discussions.
Facilitators training
To ensure effective CBL implementation, the facilitators underwent a comprehensive orientation and training program tailored specifically to teaching applied physiology through CBL. The training program spanned two weeks and covered the following key dimensions:
Theoretical training and orientation sessions
These included an in-depth understanding of the CBL methodology. The training focused on clarifying CBL principles in applied physiology. Facilitators received guidance on their roles and responsibilities during CBL sessions, including facilitating knowledge delivery through case scenarios and discussions, and were encouraged to act as guides rather than teachers. They were trained to actively monitor group dynamics, check progress, supervise discussions, encourage active student participation, and provide guidance as needed. Facilitators were also trained in the critical analysis of clinical cases, in formulating relevant questions that trigger critical thinking, and in guiding discussions to maximize student learning.
Practical training
Hands-on workshops simulating CBL sessions were arranged, in which facilitators practiced moderating discussions, monitoring group dynamics, and guiding students through case analyses. Additionally, facilitators received training in formulating relevant questions that trigger critical thinking and guide discussions.
Assessment of preparedness
Facilitators participated in mock sessions, where their skills in engaging students, encouraging participation, and managing discussions were evaluated and refined.
Ongoing professional support for facilitators
Throughout the semester, facilitators benefited from a comprehensive support system that included regular mentoring sessions and weekly debriefing meetings to address emerging challenges. A robust peer support network was established among facilitators, fostering collaborative problem-solving and shared learning experiences. Additionally, facilitators had continuous access to curated educational resources and materials to enhance their teaching effectiveness and maintain high-quality CBL implementation.
Facilitator selection process
All facilitators were selected from among the physiology department faculty members who met the following criteria:
▪ Expertise in applied physiology: a minimum of 5 years of teaching experience in applied physiology
▪ Prior experience with interactive teaching methods, including PBL or case-based teaching
▪ Active involvement in physiotherapy education
▪ Demonstrated interest in innovative teaching methodologies
Expert-led comprehensive facilitator training program
The intensive CBL facilitator training program was conducted by multidisciplinary education experts as follows:
▪ A senior professor with more than fifteen years of CBL implementation experience
▪ An educational expert specializing in medical education methodologies
▪ A curriculum development specialist
Effective CBL session
Aligning goals: building the learning objectives of an effective CBL session
The learning objectives for each CBL session were collaboratively developed by a committee of physiologists and neurologists, together with the facilitators, based on the overall course objectives and the intended learning outcomes (ILOs) of the neuroscience module. To ensure that students gained both theoretical knowledge and practical skills, the ILOs of each CBL session were aligned with the course learning outcomes (CLOs), including knowledge- and skill-based objectives related to neurophysiology. This alignment ensured that the cases addressed both theoretical understanding and practical application to clinical scenarios. These objectives were clearly defined for the facilitators after sharing the case study scenario and all relevant patient information for each CBL. The learning objectives were shared with students at the beginning of each CBL session to guide discussions and ensure alignment with the session's goals. During the discussions, students were encouraged to ask questions and explore related topics, fostering an environment of active learning and critical thinking .
CBL as an integral part of the physiology course
CBL sessions were integrated as a mandatory component of the applied physiology course in the 4th semester of the undergraduate physiotherapy program. This course is part of the foundational phase of the program, which spans the first four semesters and focuses on basic and applied sciences. The entire undergraduate physiotherapy program consists of 10 semesters over five academic years, with an additional internship year comprising 36 h per week for 12 months. An overview of case-related physiology topics was delivered to students in traditional interactive didactic lectures, which were used to convey foundational knowledge in applied physiology. The CBL sessions were then implemented during the laboratory component of the course, allowing students to apply theoretical concepts from lectures to clinical case scenarios. These CBL sessions were conducted weekly and designed to provide hands-on, case-based problem-solving experiences aligned with the topics covered in the neuroscience course. Furthermore, the evaluation of students' understanding of applied physiology, including their participation in CBL sessions, was carried out through Objective Structured Practical Examinations (OSPE). The interactive didactic lectures and practical labs were retained in the 4th semester and were consistent with the TTMs used in the first semester.
The frequency of interactive lectures was not reduced; instead, CBL was integrated into the weekly lab sessions, which were already part of the curriculum.
CBL formatting
Nine cases were selected through an online search to match the specific objectives and contents of the neuroscience physiology course. Each case was thoroughly reviewed, focusing on relevance, comprehensiveness, and the potential to stimulate critical thinking and discussion among students. The ILOs of each CBL case were explicitly designed to address the CLOs, including knowledge- and skill-based objectives (Table ).
The students' preparation for CBL sessions
To ensure student preparedness, all case-related materials and recommended reading resources were made available on Microsoft Teams at the beginning of the semester. Students were expected to review these materials, engage in pre-reading, and come to the session ready for active participation and meaningful discussion. This preparatory phase was vital for fostering effective group analysis and ensuring all students had a foundational understanding of the case content.
Optimizing CBL in physiology labs: small-group organization and facilitators' guidance
In the physiology labs, the total number of students was organized into small groups, each comprising 25 to 30 students. These groups were further divided into approximately five smaller batches, each with six students. This setup allowed for increased interaction and personalized attention. Nine trained physiologists facilitated the CBL sessions, rotating among the different lab groups to provide students with a variety of expert insights and to ensure consistent guidance and support during the CBL activities. Each facilitator remained with a small group for the entire duration of the lab session, actively supervising and moderating discussions. By rotating across the various lab groups at different time slots throughout the week, facilitators ensured that all students received consistent, high-quality guidance. This rotational approach effectively accommodated the large number of participants while preserving the small-group dynamic critical to successful CBL implementation.
CBL implementation: the Kaddoura approach in action
The facilitator guided the learning process during the CBL sessions using the Kaddoura approach (Supplementary Figure 1). The Kaddoura method includes five sequential steps: case presentation, presentation of triggering questions by the facilitators, creation of a comfortable and safe atmosphere for learners, active participation of all students in discussions, and, finally, case summarization by the facilitator . Each session started with an interactive introduction to the physiology topic. All students were encouraged to participate actively in the discussions.
Tools of data collection
Data were collected through the following tools.
The students' perception questionnaire
To better understand the students' acceptance and perception of the newly implemented CBL approach, they were encouraged to complete a web-based questionnaire using Google Forms. It was delivered at the end of the fourth semester to the physiotherapy students enrolled in the neuroscience course. The questionnaire was administered in English.
The questionnaire consisted of two sections: the first section included demographic information, while the second section was specially designed to evaluate students' perception of the CBL approach, as follows:
Socio-demographic (SD) section of the questionnaire
An SD part was prepared to ask about the participants' gender and age.
The perception (P) section of the questionnaire
This part was composed of 20 items and aimed to gather insights on perception levels and experiences with the combined CBL and TTM approach conducted during the fourth semester, using a five-point Likert scale (strongly agree, agree, neither agree nor disagree, disagree, and strongly disagree). The face validity index (FVI) of the P questionnaire was evaluated to assess both the clarity and comprehensibility of its items, achieving an overall score of 0.95. This was performed by 20 users. The students were asked to rate each item on a scale from 1 (not clear, not understandable) to 4 (clear and understandable). Scores of 3 and 4 were categorized as 1 (clear and understandable), and scores of 1 and 2 were categorized as 0 (not clear or not understandable). The FVI for each item was then computed, and the average was taken to obtain the scale's average FVI score . Moreover, the content validity index (CVI) and content validity ratio (CVR) of the P questionnaire were estimated at 0.81 and 0.80, respectively; both were confirmed as being above 0.60. Cronbach's alpha of the P questionnaire was 0.9 . Finally, a question was included to rate the overall perception level of the CBL approach on a scale of 0 to 10 . The students were asked to rate each statement on a five-item Likert-type scale regarding its compatibility with the combined CBL and TTM approach. A total score for each participant was obtained from the P questionnaire and divided by 20 in order to calculate a mean perception score out of 5 for use in the statistical analysis .
Facilitators' feedback questionnaire
The facilitators' feedback questionnaire had both open-ended and structured questions. The open-ended questions were developed to support better future CBL implementation by collecting facilitators' insights on challenges and recommendations for improving CBL delivery. The closed-ended questions were based on nine parameters and scored using a five-point Likert scale ranging from strongly disagree to strongly agree . The facilitators' feedback questionnaire was validated through a pilot test with a small group of facilitators and experts in the field, achieving an FVI score of 0.92, indicating high face validity. Additionally, the CVI and CVR were estimated at 0.85 and 0.79, respectively, indicating good content validity of the feedback questionnaire.
Indicators for academic achievement
To compare academic performance between TTMs alone and the combined case-based and traditional education in applied physiology, examination scores obtained by the physiotherapy students at the end of the first and fourth semesters were used as benchmarks.
Statistical methods
All data were tabulated and analyzed using IBM SPSS Statistics for Windows, version 26 (IBM Corp., Armonk, NY, USA). The total perception score was calculated by assigning 5 to strongly agree, 4 to agree, 3 to neutral, 2 to disagree, and 1 to strongly disagree responses.
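The scale-level computations described above (the item and average face validity indices, Cronbach's alpha, and the mean perception score) can be written compactly; the sketch below uses randomly generated rating matrices as stand-ins for the real responses, so the printed values will not match the reported 0.95 and 0.9.

```python
import numpy as np

rng = np.random.default_rng(1)

# FVI: 20 raters x 20 items, clarity rated 1..4; ratings of 3-4 count as clear
clarity = rng.integers(1, 5, size=(20, 20))
item_fvi = (clarity >= 3).mean(axis=0)  # item-level FVI (proportion rating 3-4)
scale_fvi = item_fvi.mean()             # average scale-level FVI

# Cronbach's alpha for the 20-item perception scale (244 respondents, 1..5)
likert = rng.integers(1, 6, size=(244, 20))
k = likert.shape[1]
alpha = k / (k - 1) * (1 - likert.var(axis=0, ddof=1).sum()
                       / likert.sum(axis=1).var(ddof=1))

# Mean perception score per participant: total of the 20 items divided by 20
mean_perception = likert.sum(axis=1) / 20

print(f"scale FVI = {scale_fvi:.2f}, alpha = {alpha:.2f}")
```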
A perception score of 3 or above on the 5-point Likert scale was considered positive, while scores below 3 were considered negative. Categorical data were represented as frequencies and percentages. Possible associations between categorical variables were analyzed using Pearson's chi-square test or Fisher's exact test, as appropriate. Continuous variables were reported as medians with interquartile ranges (IQR: 25th–75th percentiles) and were compared using the Mann–Whitney U test because they were not normally distributed. Furthermore, the Wilcoxon signed-rank test was applied to compare students' grades before and after the CBL intervention. Open-ended responses from the facilitators' feedback questionnaire were analyzed using thematic analysis. Data familiarization was followed by coding to identify recurring patterns and unique insights. Thematic analysis was conducted to group related codes into broader themes, which were categorized under relevant domains. A p -value of < 0.05 was considered statistically significant.
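As an illustration of the paired before/after comparison, the sketch below applies the Wilcoxon signed-rank test to hypothetical first- and fourth-semester exam scores; the score vectors are simulated placeholders, not the cohort's actual grades.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(2)
# Paired exam scores per student: semester 1 (TTMs alone) vs semester 4 (TTMs + CBL)
sem1 = np.clip(rng.normal(70, 10, 244), 0, 100)
sem4 = np.clip(sem1 + rng.normal(5, 8, 244), 0, 100)

stat, p = wilcoxon(sem1, sem4)
print(f"W = {stat:.0f}, p = {p:.2e}")
```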
Additionally, facilitators have received training in formulating relevant questions that trigger critical thinking and guide discussions. Assessment of preparedness Facilitators participated in mock sessions, where their skills in engaging students, encouraging participation, and managing discussions were evaluated and refined. Ongoing professional support for facilitators Throughout the semester, facilitators benefited from a comprehensive support system that included regular mentoring sessions and weekly debriefing meetings to address emerging challenges. A robust peer support network was established among facilitators, fostering collaborative problem-solving and shared learning experiences. Additionally, facilitators had continuous access to curated educational resources and materials to enhance their teaching effectiveness and maintain high-quality CBL implementation. Facilitator selection process All facilitators were selected from the department of physiology faculty members who met the following criteria: ▪ Expertise in applied physiology: Minimum of 5 years of teaching experience in applied physiology ▪ Prior experience with interactive teaching methods including PBL or case-based teaching methods ▪ Active involvement in physiotherapy education ▪ Demonstrated interest in innovative teaching methodologies Expert-led comprehensive facilitator training program The intensive CBL facilitator training program was conducted by multidisciplinary education experts as follow: ▪ A senior professor with more than fifteen years of CBL implementation experience ▪ An educational expert specializing in medical education methodologies ▪ A curriculum development specialist To ensure effective CBL implementation, the facilitators underwent a comprehensive orientation and training program selectively tailored for applied physiology teaching through CBL. The training program spanned over two weeks and covered the following key dimensions: Theoretical training and orientation sessions Included an in-depth understanding of the CBL methodology. The training focused on clarification of the CBL principles in applied physiology. They received guidance on their roles and responsibilities through CBL sessions, including facilitating knowledge delivery through case scenarios and discussions. They were encouraged to act as guides rather than teachers. The facilitators were trained to actively monitor group dynamics, check their progress, supervise their discussions, encourage the active participation of students, and provide guidance as needed. Facilitators were trained for the critical analysis of clinical cases, formulating relevant questions that trigger critical thinking, and how to guide discussions to maximize students’ learning. Practical training Hands-on workshops simulating CBL sessions were arranged, where facilitators practiced moderating discussions, monitoring group dynamics, and guiding students through case analyses. Additionally, facilitators have received training in formulating relevant questions that trigger critical thinking and guide discussions. Assessment of preparedness Facilitators participated in mock sessions, where their skills in engaging students, encouraging participation, and managing discussions were evaluated and refined. Ongoing professional support for facilitators Throughout the semester, facilitators benefited from a comprehensive support system that included regular mentoring sessions and weekly debriefing meetings to address emerging challenges. 
Aligning goals: building the learning objectives of effective CBL sessions

The learning objectives for each CBL session were collaboratively developed by a committee of physiologists and neurologists, together with the facilitators, based on the overall course objectives and the intended learning outcomes (ILOs) of the neuroscience module. To ensure that students gained both theoretical knowledge and practical skills, the ILOs of each CBL session were aligned with the course learning outcomes (CLOs), including knowledge- and skill-based objectives related to neurophysiology. This alignment ensured that the cases addressed both theoretical understanding and practical application to clinical scenarios. These objectives were clearly communicated to the facilitators after sharing the case study scenario and all relevant patient information for each CBL. The learning objectives were also shared with students at the beginning of each CBL session to guide discussions and ensure alignment with the session's goals. During the discussions, students were encouraged to ask questions and explore related topics, fostering an environment of active learning and critical thinking.

CBL as an integral part of the physiology course

CBL sessions were integrated as a mandatory component of the applied physiology course in the 4th semester of the undergraduate physiotherapy program. This course is part of the foundational phase of the program, which spans the first four semesters and focuses on basic and applied sciences. The entire undergraduate physiotherapy program consists of 10 semesters over five academic years, with an additional internship year comprising 36 h per week for 12 months. An overview of case-related physiology topics was delivered to students in traditional interactive didactic lectures, which were used to convey foundational knowledge in applied physiology. The CBL sessions were then implemented during the laboratory component of the course, allowing students to apply theoretical concepts from lectures to clinical case scenarios. These CBL sessions were conducted weekly and designed to provide hands-on, case-based problem-solving experiences aligned with the topics covered in the neuroscience course. Furthermore, the evaluation of students' understanding of applied physiology, including their participation in CBL sessions, was carried out through Objective Structured Practical Examinations (OSPE). The interactive didactic lectures and practical labs were retained in the 4th semester and were consistent with the TTMs used in the first semester. The frequency of interactive lectures was not reduced; instead, CBL was integrated into the weekly lab sessions, which were already part of the curriculum.
CBL formatting

Nine cases were selected through an online search to match the specific objectives and contents of the neuroscience physiology course. Each case was thoroughly reviewed, focusing on relevance, comprehensiveness, and the potential to stimulate critical thinking and discussion among students. The ILOs of each CBL case were explicitly designed to address the CLOs, including knowledge- and skill-based objectives (Table ).

The students' preparation for CBL sessions

To ensure student preparedness, all case-related materials and recommended reading resources were made available on Microsoft Teams at the beginning of the semester. Students were expected to review these materials, engage in pre-reading, and come to the session ready for active participation and meaningful discussion. This preparatory phase was vital for fostering effective group analysis and ensuring all students had a foundational understanding of the case content.

Optimizing CBL in physiology labs: small-group organization and facilitators' guidance

In the physiology labs, the total number of students was organized into small groups, each comprising 25 to 30 students. These groups were further divided into approximately five smaller batches, each with six students. This setup allowed for increased interaction and personalized attention. Nine trained physiologists facilitated the CBL sessions, rotating among the different lab groups to provide students with a variety of expert insights and to ensure consistent guidance and support during the CBL activities. Each facilitator remained with a small group for the entire duration of the lab session, actively supervising and moderating discussions. By rotating across the various lab groups at different time slots throughout the week, facilitators ensured that all students received consistent, high-quality guidance. This rotational approach effectively accommodated the large number of participants while preserving the small-group dynamic critical for the success of CBL implementation.

CBL implementation: the Kaddoura approach in action

The facilitator guided the learning process during the CBL sessions using the Kaddoura approach (Supplementary Figure 1). The Kaddoura method includes five sequential steps: case presentation, presentation of triggering questions by the facilitators, creation of a comfortable and safe atmosphere for learners, active participation of all students in discussions, and finally case summarization by the facilitator. Each session started with an interactive introduction to the physiology topic, and all students were encouraged to participate actively in the discussions.
The students' perception questionnaire

To better understand the students' acceptance and perception of the newly implemented CBL approach, they were encouraged to complete a web-based questionnaire using Google Forms. It was delivered, in English, at the end of the fourth semester to the physiotherapy students enrolled in the neuroscience course. The questionnaire consisted of two sections: the first collected demographic information, while the second was specifically designed to evaluate students' perception of the CBL approach, as follows:

Socio-demographic (SD) section of the questionnaire

The SD section asked about the participants' gender and age.

The perception (P) section of the questionnaire

This section comprised 20 items and aimed to gather insights on perception levels and experiences with the combined CBL and TTMs approach conducted during the fourth semester, using a five-point Likert scale (strongly agree, agree, neither agree nor disagree, disagree, and strongly disagree). The face validity index (FVI) of the P questionnaire was evaluated by 20 users to assess the clarity and comprehensibility of its items, achieving an overall score of 0.95. The students were asked to rate each item on a scale from 1 (not clear and not understandable) to 4 (clear and understandable). Scores of 3 and 4 were categorized as 1 (clear and understandable), and scores of 1 and 2 were categorized as 0 (not clear or not understandable). The FVI for each item was then computed, and the average was taken to obtain the scale's average FVI score. Moreover, the content validity index (CVI) and content validity ratio (CVR) of the P questionnaire were estimated at 0.81 and 0.80, respectively; both were confirmed as being above 0.60. Cronbach's alpha of the P questionnaire was 0.9. Finally, a question was included to rate the overall perception level of the CBL approach on a scale of 0 to 10. The students were asked to rate each statement on a five-point Likert-type scale regarding its compatibility with the combined CBL and TTMs approach. A total score for each participant was obtained from the P questionnaire and divided by 20 to calculate a mean perception score out of 5 for use in the statistical analysis.
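To make the dichotomization and scoring arithmetic described above concrete, the following is a minimal R sketch using synthetic ratings; the matrix sizes and values are illustrative only and do not reproduce the study's data.

```r
set.seed(1)

# 20 raters x 20 items, clarity ratings on a 1-4 scale (synthetic)
ratings <- matrix(sample(1:4, 20 * 20, replace = TRUE,
                         prob = c(0.05, 0.05, 0.3, 0.6)),
                  nrow = 20)

# Dichotomize: ratings of 3-4 count as clear (1), 1-2 as unclear (0)
clear <- ifelse(ratings >= 3, 1, 0)

item_fvi  <- colMeans(clear)  # FVI per item (proportion rating it clear)
scale_fvi <- mean(item_fvi)   # average FVI of the whole scale

# Mean perception score: 20 Likert items (1-5) summed, then divided by 20
likert <- matrix(sample(1:5, 238 * 20, replace = TRUE), nrow = 238)
mean_perception <- rowSums(likert) / 20  # one score out of 5 per student
```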
Facilitators' feedback questionnaire

The facilitators' feedback questionnaire contained both open-ended and closed-ended questions. The open-ended questions were developed to support better CBL implementation in the future by collecting facilitators' insights on challenges and recommendations for improvement. The closed-ended questions covered nine parameters and were scored on a five-point Likert scale ranging from strongly disagree to strongly agree. The facilitators' feedback questionnaire was validated through a pilot test with a small group of facilitators and experts in the field, achieving an FVI score of 0.92, indicating high face validity. Additionally, the CVI and CVR were estimated at 0.85 and 0.79, respectively, indicating good content validity of the feedback questionnaire.

Indicators for academic achievement

To compare academic performance between TTMs alone and the combined case-based and traditional education in applied physiology, examination scores obtained by the physiotherapy students at the end of the first and fourth semesters were used as benchmarks.
Statistical analysis

All data were tabulated and analyzed using IBM SPSS Statistics for Windows, version 26 (IBM Corp., Armonk, NY, USA). The total perception score was calculated by assigning 5 to strongly agree, 4 to agree, 3 to neutral, 2 to disagree, and 1 to strongly disagree responses. A perception score of 3 or above on the 5-point Likert scale was considered positive, while scores below 3 were considered negative. Categorical data were represented as frequencies and percentages. Possible associations between categorical variables were analyzed using Pearson's chi-square test or Fisher's exact test, as appropriate. Continuous variables were reported as median with interquartile range (IQR: 25th–75th percentiles) and compared using the Mann–Whitney U test, because they were not normally distributed. Furthermore, the Wilcoxon signed-rank test was applied to compare the students' grades before and after the CBL intervention. Open-ended responses from the facilitators' feedback questionnaire were analyzed using thematic analysis: data familiarization was followed by coding to identify recurring patterns and unique insights, and related codes were then grouped into broader themes, which were categorized under relevant domains. A p-value of < 0.05 was considered statistically significant.
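The scoring and the main group comparisons can be sketched in a few lines of R; the data below are synthetic and the variable names are placeholders, so this illustrates the procedure rather than reproducing the study's analysis.

```r
set.seed(2)
n <- 238

# Map Likert responses to scores 5..1 and flag positive perception (>= 3)
resp  <- sample(c("SA", "A", "N", "D", "SD"), n, replace = TRUE,
                prob = c(0.5, 0.3, 0.1, 0.06, 0.04))
score <- c(SA = 5, A = 4, N = 3, D = 2, SD = 1)[resp]
positive <- score >= 3

# Association between two categorical variables: chi-square, or Fisher's
# exact test when expected cell counts are small
gender <- sample(c("female", "male"), n, replace = TRUE, prob = c(0.72, 0.28))
tab <- table(gender, positive)
if (any(suppressWarnings(chisq.test(tab))$expected < 5)) {
  fisher.test(tab)
} else {
  chisq.test(tab)
}

# Mann-Whitney U test comparing a non-normal continuous variable by gender,
# reported as median with IQR
total <- score + rnorm(n)  # stand-in for the total perception score
wilcox.test(total ~ gender)
tapply(total, gender, quantile, probs = c(0.25, 0.5, 0.75))
```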
Demographic data

Two hundred thirty-eight out of 244 undergraduate physiotherapy students completed the survey following the combined CBL and TTMs approach during the neuroscience course in the fourth semester. Of the participants, 72.3% were female and 27.7% were male; 90.8% were 18–20 years old, while 9.2% were 21–23 years old (Figure S2).

Students' perception of the different features of hybrid learning "combined CBL with TTMs" (Table )

From the perspective of the enrolled students, the most significant advantages of integrating CBL with TTMs as a hybrid teaching tool, compared to TTMs alone, were its ability to comprehensively cover course objectives and effectively evaluate students' knowledge (98.3%). Additionally, most participants (97.9%) believed that the CBL approach allowed for greater engagement through more questions, and 97.5% stated that CBL felt closer to real-life scenarios. A majority of students (97.1%) described CBL as a motivating, efficient, and engaging teaching tool in clinical practice that effectively summarizes content. Additionally, 96.6% of participants stated that CBL enhances self-confidence and reduces class monotony. Among the 238 undergraduate students, 229 highlighted that CBL significantly facilitates learning, promotes deep thinking, and improves retention of topics. Furthermore, 95.8% noted that CBL organizes information well and supports comprehension, while 95.4% believed it enhances visualization skills. Cooperation and participation were also cited as benefits by 93.7%, and 93.3% affirmed that CBL is a more practical tool.

Association between gender and perception level regarding individual features of incorporating CBL into the traditional learning methods (Table )

Although female students constituted the majority of the participants (72.3%) compared to males (27.7%), no significant gender-based differences were observed in perception levels regarding the various features of CBL, as indicated by each question (P > 0.05).

Overall perception of the incorporation of CBL into the traditional teaching framework for applied physiology and its association with gender and age groups

Overall, 232 students (97.5%) affirmed that the combined CBL with TTMs approach is superior to TTMs alone (Table ). Figure S3 illustrates the overall perception levels of enrolled students after integrating CBL with TTMs in teaching applied physiology during the neuroscience course for undergraduate physiotherapy students. The perception scores ranged from 25.0 to 100.0, with a median of 99.0 and an IQR of 88.0–100.0. Table indicates a significantly higher perception level among female participants compared to males (P < 0.05), with a median perception score of 100.0 for females compared to 96.5 for males. Similarly, the mean rank of the total perception score was 124.65 for females and 106.08 for males. In contrast, no significant differences (P > 0.05) were observed in total perception scores across age groups: the median and mean rank values were 99.0 and 121.52, respectively, for the 18–20 age group, compared to 96.5 and 99.68 for the 21–23 age group.
Students' academic achievement

Integrating CBL with TTMs in teaching applied physiology was apparently associated with better academic achievement. Although the maximum grade was the same for the two teaching methods (10), the minimum grade was 2 out of 10 when applied physiology was taught through TTMs in the first semester, compared to a minimum of 7.5 out of 10.0 with the integration of CBL and TTMs in the fourth semester. Similarly, the median grade was 8.5 with TTMs alone compared to 10 when CBL was combined with traditional methods. The students' better achievement was also apparent at the 25th and 75th percentiles, where the hybrid approach integrating CBL with TTMs scored 10.0 at both percentiles, compared to 7.0 and 9.5, respectively, with TTMs (Table ).
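A hedged sketch of the before/after grade comparison reported above, using the Wilcoxon signed-rank test on synthetic paired grades; the simulated values only loosely echo the reported ranges and are not the study's data.

```r
set.seed(3)
n <- 238

# Simulated paired grades out of 10: semester 1 (TTMs) vs semester 4 (CBL + TTMs)
grade_ttm <- pmin(10, pmax(2,   round(rnorm(n, mean = 8.5, sd = 1.5) * 2) / 2))
grade_cbl <- pmin(10, pmax(7.5, round(rnorm(n, mean = 9.5, sd = 0.8) * 2) / 2))

# Paired nonparametric comparison of grades before and after the intervention
wilcox.test(grade_cbl, grade_ttm, paired = TRUE)

# Minimum, quartiles, median and maximum, as reported in the results
sapply(list(TTMs = grade_ttm, CBL_TTMs = grade_cbl),
       quantile, probs = c(0, 0.25, 0.5, 0.75, 1))
```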
Facilitators' feedback

CBL was well received by the facilitators, with 85% agreeing that combining CBL with TTMs enhances students' communication skills, fosters a better and safer relationship between facilitators and students, and helps in understanding group dynamics. Additionally, 80% of the facilitators supported incorporating CBL into the timetable as a regular learning tool alongside TTMs (Figure S4). A substantial proportion (80%) of the facilitators believed that integrating CBL with TTMs is a better teaching strategy, as it enhances students' problem-solving and self-directed learning abilities while facilitating the integration of knowledge across different subjects. Additionally, 80% of facilitators emphasized that this approach supports better knowledge retention among students (Figure S4). The majority also agreed that combining CBL with TTMs represents a concerted effort to bridge existing gaps between physiologists and students in a clinical context. While most facilitators welcomed CBL as a complementary tool to TTMs that aids in the achievement of objectives, some challenges were identified by the facilitators; these are summarized in Table .

Despite the benefits of TTMs as a learning approach in undergraduate medical teaching, they have been questioned for doing little to develop reasoning skills and critical thinking, which are key elements of any physiotherapy career.
CBL offers a dynamic and interactive learning strategy that integrates both guided and structured learning to cultivate these crucial abilities. In the current study, CBL was applied alongside TTMs in applied physiology for undergraduate physiotherapy students as an exploratory step; the combination encouraged active learning and yielded a more productive outcome, reflecting the potential benefits of CBL in fostering engagement and comprehension. By discussing real clinical case scenarios related to the topics taught in the neuroscience module, physiotherapy students evaluated their own perception level, while their academic achievement was assessed through their grades. The higher overall perception level observed with the integration of CBL within TTMs in the current study underscores the importance of CBL in preparing students for the clinical demands of their medical careers while fostering essential skills critical for their performance and competency.

The analysis of student perceptions demonstrates strong support for integrating CBL with TTMs across all 20 measured items. Remarkably, 97.5% of students expressed a positive perception of the combined approach compared to TTMs alone, with a median overall perception score of 99.0 out of 100. Students particularly appreciated CBL's ability to facilitate deeper thinking and its effectiveness in reducing class monotony. These findings suggest that CBL, when integrated with TTMs, may be particularly beneficial for maintaining student interest and motivation throughout the course. The high level of student agreement with CBL's ability to organize information and facilitate assessment indicates that this method may also improve students' learning outcomes. Furthermore, the perception that the combined learning approach increases students' self-confidence aligns with previous research suggesting that active learning approaches can enhance student confidence. Interestingly, students perceived the combination of CBL and TTMs as more efficient for clinical practice. This suggests that integrating CBL with TTMs may offer benefits beyond academic settings, potentially enhancing clinical decision-making skills.

The positive perceptions observed in this study can be attributed to the student-centered nature of CBL, which fosters active participation, critical thinking, and real-world applicability. These features address many limitations of TTMs, such as passive learning and lack of engagement. Additionally, the structured design of CBL cases, which align closely with course objectives, ensures clarity and focus in learning. High student approval of CBL in teaching physiology has been reported previously, where students clearly enjoyed their experience with CBL, perceived it as valuable, and rated the overall CBL program as good to excellent on a five-point Likert scale. The current findings are consistent with the preliminary results reported by Brown et al. (2012), who implemented a CBL approach for undergraduate health sciences students at the University of Ottawa. In their pilot project, 144 students participated and achieved an average score of 4.13 out of 5 on a quiz designed to evaluate their mastery of the concepts covered in the CBL sessions. Furthermore, the students rated the overall learning benefit of the program as 3.82 out of 4 on a nominal scale, highlighting its perceived educational value.
These results reinforce the potential of CBL to significantly enhance the learning experience of undergraduate students, especially when combined with TTMs. The positive findings from the current study are further bolstered by the recent work of Saini et al. (2024), who highlighted CBL as a student-centric, self-directed learning approach that fosters collaboration and critical thinking among 134 final-year physiotherapy, medical, and nursing students. Using pre- and post-test assessments, they reported significantly higher post-test scores following the CBL approach, indicating greater knowledge acquisition (p < 0.05). Students also reported enhanced learning experiences, highlighting the role of CBL in consolidating and integrating knowledge and in applying learned concepts to real-world scenarios.

To investigate potential gender-based differences in the effectiveness of CBL integration with TTMs, we assessed the association between gender and perceptions. Our findings showed no significant gender-based differences in perception levels regarding the various features of CBL (Table ). These results are consistent with previous studies that also reported no significant gender differences in the perception of active learning methods. In contrast, other educational research has highlighted gender-based differences in perceptions across broader educational contexts.

CBL emphasizes the active role of students in creating their own knowledge (discovery learning) while engaging with the designed clinical cases and building their own understanding of medical procedures and concepts. It has been stated that CBL provides better opportunities for students to formulate diagnoses and delineate appropriate management solutions, as well as to relate the possible underlying mechanisms to the proposed diagnosis and treatment. Students reported that CBL enhanced their motivation and interest in learning, based on their experiences during the course. Interestingly, CBL-related student-centered features, such as self-confidence, learning, and critical thinking, were all rated higher with CBL (Table ). These findings support the idea that such skills can be improved through CBL. Additionally, the implementation of CBL as a complement to TTMs in applied physiology was linked to greater cooperation among students and a better teacher-student relationship, creating a trusting and collaborative classroom culture. Prior lines of evidence have pointed out that CBL encourages students to share their knowledge during discussion and to resolve the clinical cases. Indeed, inter-peer collaboration and interaction are fundamental skills required to work efficiently in multidisciplinary clinical teams. The results of this study are consistent with prior findings that scored CBL as an effective learning tool for developing critical thinking skills, allowing students to link what they have learned with real-world scenarios.

The combined application of CBL and TTMs was associated with significantly improved academic performance (Table ). In alignment with our results, the application of CBL in teaching endocrine physiology was associated with enhanced student learning and better knowledge assimilation. The improved retention of knowledge with CBL could be attributed to the fact that students are required to study the same topic simultaneously across subjects and to integrate this knowledge to reach a decision for the problem posed in the case scenario.
The facilitators believed that integrating CBL into TTMs would better prepare physiotherapy students for future clinical practice by challenging them with realistic clinical cases. Additionally, they speculated that students would collaboratively apply their previously acquired theoretical knowledge to gradually make appropriate decisions and propose solutions to the assigned clinical scenarios while identifying the key relevant characteristics. The facilitators needed to expend relatively little effort in providing detailed information about the clinical case, instead stimulating a gradual discussion among students. The positive feedback reported by the facilitators in the current study matches that previously recorded in CBL implementation studies. The facilitators reported that students became more actively engaged and collaborative during the discussion of clinical cases in CBL sessions. This observation was supported by the thematic analysis of the open-ended questions: the qualitative feedback highlighted that students were more motivated to participate, ask questions, and share their insights during the discussion of clinical cases. This could be explained by the reliance of CBL implementation on open questions, which leaves students more confident and promotes their participation in the clinical discussion.

This study brings several unique strengths to the field of physiotherapy education. It is the first to integrate CBL with TTMs in applied physiology specifically for physiotherapy students, providing novel insights into the effectiveness of this combined approach in this context. The study focuses on neurophysiology, a critical area in physiotherapy, enhancing students' skills in managing clinical cases with a neurological basis. Furthermore, the careful formulation of course objectives and the selection of real, previously published cases by a committee of physiologists and neurologists, in consultation with students, ensured that the CBL approach was tailored to the specific needs and interests of the target students. Additionally, the relatively large sample size (n = 238) and high response rate (97.5%) of the student survey provide robust evidence for the effectiveness of integrating CBL with TTMs. The structured weekly implementation in lab sessions with OSPE-based assessment underscores a replicable and adaptable model, presenting a practical framework for other institutions aiming to enhance applied physiology education.

The current study has some limitations, including the involvement of a single cohort, convenience sampling, and the single-institution setting, which may limit the generalizability of the findings. Although this study primarily focused on short-term outcomes at the end of the semester, the strong academic performance observed provides a foundation for hypothesizing that the active learning environment created by CBL can support long-term knowledge retention. To address this limitation, future research could include longitudinal assessments, such as follow-up exams or evaluations of clinical performance, to measure the durability of knowledge retention. Additionally, incorporating spaced repetition or periodic review sessions into the CBL framework may further enhance long-term retention. Moreover, practicing CBL over a longer period, in a wider range of labs equipped with appropriate infrastructure, and incorporating a newly developed online learning environment would strengthen future implementations.
Finally, this study is limited by the content differences between the first and fourth semesters, which might have influenced outcomes and potentially confounded the comparison of academic performance. Additionally, the absence of a control group may be considered another limitation; future research should consider incorporating a control group to enable a more robust and direct comparison.

While CBL holds immense potential to transform applied physiology education in a clinical context, its implementation can present unique challenges. These potential challenges of implementing CBL in applied physiology could be addressed through strategies proposed on the basis of our experience, as illustrated in Table . These suggestions could help to construct a better CBL framework and empower students to become active participants for better engagement in CBL. By addressing these challenges, students could acquire the broad range of critical thinking abilities and collaboration skills vital to adequately preparing them for the complexities of their future clinical practice. CBL can be broadly implemented as a more interactive teaching tool not only in applied physiology but also in other health sciences to overcome the limitations of TTMs and ensure better outcomes.

While the current study focuses on short-term academic performance as an indicator of the effectiveness of concurrent TTMs and CBL, we recommend implementing follow-up assessments in subsequent semesters or at the end of the program to capture the long-term impact of CBL on knowledge retention. These assessments should test foundational concepts gained during physiology courses to evaluate the longevity of knowledge retention. Additionally, future studies could explore the perception and effectiveness of CBL when implemented independently of TTMs, enabling a clearer understanding of its isolated impact on student learning outcomes. Based on the positive results of the current study, we recommend integrating CBL into other courses within the physiotherapy program. This consistent application of active learning strategies can reinforce knowledge retention through greater engagement and motivation of students, leading to better encoding and retrieval of information and promoting better academic outcomes. Additionally, a longitudinal study should be conducted to track student performance and knowledge retention throughout the physiotherapy program, which would provide valuable insights into the long-term effects of CBL. Such studies would also be instrumental in assessing clinical competence and patient outcomes.

Incorporation of CBL into the existing TTMs framework for teaching applied physiology was advantageous for physiotherapy students as a preliminary step toward their entry into clinical practice and, ultimately, toward successfully managing patients, as it encourages students to pursue self-directed learning and to develop both analytical and problem-solving skills. This hybrid teaching tool, integrating CBL into applied physiology, encourages active learning, helps physiotherapy students gain the requisite knowledge, and enhances their analytical and communication skills. The interactive and contextually relevant nature of CBL accommodates different learning styles, catering to visual, auditory, and kinesthetic learners alike. By engaging students in real-world scenarios, CBL fosters critical thinking, problem-solving, and clinical reasoning skills, all of which are essential for professional practice.

Supplementary Material 1.
Exploratory DNA methylation analysis in post-mortem heart tissue of sudden unexplained death | 0e2796c0-59a9-47d6-8db6-d4b46bad72ba | 11585171 | Forensic Medicine[mh] | Sudden cardiac death (SCD) is a devastating event, especially in young people. The first symptom of SCD is often death itself, leaving neither the affected person nor their relatives time to prepare for these extraordinary circumstances . By definition, a person has died of SCD if death occurred less than one hour after onset of symptoms and was witnessed. If unwitnessed, SCD is assumed if the person has been seen alive and well less than 24 h before death . One subgroup of SCD is termed sudden unexplained death (SUD) , which is commonly defined as deaths occurring in people older than 1 year. In these cases, potentially inherited cardiac conditions are often suspected to be the main cause of death . SUD cases may be termed sudden arrhythmic death syndrome (SADS) if they have a negative pathological and toxicological assessment . While sudden death in infants until the age of 1 year is classified as sudden infant death syndrome (SIDS) , sudden death in older people (> 50 years of age) often occurs due to chronic degenerative diseases like coronary artery disease, heart failure or valvular defects . Especially in young persons who have died suddenly and unexpectedly, it has been recommended by the International Heart Rhythm Society to perform a thorough forensic investigation. This includes a revision of the medical history of the deceased as well as a complete autopsy with a macroscopic, histopathological and toxicological assessment. Finally, a molecular autopsy is recommended for all SUD cases . There are already many studies that have investigated genetic variants in SUD. Since the first molecular autopsy in 1999 by Ackerman et al. , inherited cardiac conditions like cardiac channelopathies including long QT syndrome (LQTS), short QT syndrome (SQTS), Brugada syndrome (BrS), catecholaminergic polymorphic ventricular tachycardia (CPVT) or cardiomyopathic conditions including hypertrophic cardiomyopathy (HCM), dilated cardiomyopathy (DCM) and arrhythmogenic cardiomyopathy (ACM) are being investigated as the underlying cause of SCD, SADS and SUD . Generally, the pathophysiology of SUD is complex , and many genes have been associated with underlying cardiac diseases . The genes most commonly implicated in the development of cardiac channelopathies include KCNH2 , KCNJ2 , KCNQ1 , RYR2 and SCN5A . For many channelopathies like LQTS, it has further been reported that not all genetic variants lead to disease, and incomplete penetrance is a common feature . For cardiomyopathic conditions, variants in the genes MYH7 , JPH2 , TNNI3 , TTN , LMNA , NEXN, PKP2 , DSG2 , DSP , TMEM43 , MYBPC3 , JUP and others have been often observed as having functional effects on cardiac proteins . Other commonly described genes are the two transcription factors TBX5 and GATA4 that have been implicated in SUD . They are responsible for cardiovascular development and if mutated can cause congenital heart defects and progressive cardiac conduction disorders . Despite the many genetic variants that have been identified as SCD and SUD contributors, many cases still present no clear genetic causality . Therefore, a few studies have investigated gene regulation (DNA methylation) and gene expression (RNA) to find potential risk markers and elucidate the complex etiology of SUD. 
So far, most DNA methylation studies have focused on targeted regions or specific conditions (e.g. arrhythmias or sudden death in epilepsy (SUDEP)). One study identified differential methylation of the ABCA1 promoter region between a case cohort of SCD victims and a control cohort; here, the ABCA1 gene promoter was targeted with a methyl-specific PCR. Other studies tried to use differential methylation to predict either the phenotype severity of arrhythmic conditions or the development of DCM, but failed in doing so. Being able to explain potential underlying causes for the sudden death of a beloved person can help relatives find closure for their loss and motivate them to seek counselling in case of a genetic familial inheritance. However, as molecular autopsies of the deceased often remain negative, it is crucial to expand the knowledge on potential genetic or epigenetic causes that contribute to sudden death events. At the Zurich Institute of Forensic Medicine (ZIFM), University of Zurich, Switzerland, approximately 10 sudden death cases are reported annually without any explanation of the exact cause of death. To our knowledge, no study has previously investigated the entire human methylome in SUD cases. Our study, therefore, aims to fill this gap. We will explore the human methylome using the Infinium™ MethylationEPIC v2.0 BeadChip kit in three different SUD groups and compare the results to those of a control cohort. We will also describe genes and biological pathways of the identified differentially methylated regions (DMRs) between SUD and control samples.

Study cohort

The study cohort comprised a total of 54 unrelated individuals in the SUD cohort and 20 unrelated individuals in the control cohort, who all died between the ages of one and 62 years. Left ventricle heart tissue was collected during autopsies at the ZIFM between 2013 and 2022. The heart tissue sections were shock-frozen in liquid nitrogen and then stored at −80 °C. Although SUD cases are generally defined as younger than 50 years of age, we included individuals up to 62 years of age in this study if there had been no other explanation for the sudden death event. All individuals were examined according to a standardized procedure, including a complete autopsy, death scene investigation and toxicological and histopathological screening. The following characteristics were extracted from the autopsy reports of all deceased persons: sex, age, post-mortem interval, body mass index (BMI), body height, body weight, heart weight, symptoms prior to death, circumstances at death, reanimation, alcohol status, illicit substance consumption, medical history, microbiological findings and family history. All cases were categorized into one of the following groups according to their case history and the histopathological and morphological appearance of the heart: (1) primary normal condition (primaryN), (2) primary cardiomyopathy condition (primaryCM), (3) secondary condition (secondary) and (4) control samples. The primaryN group (n = 26) comprised deceased persons whose heart appeared both macroscopically and histologically inconspicuous, and arrhythmia was the most likely underlying cause of death. Potentially, nine cases in the primaryN group might be classified as sudden unexpected death in epilepsy (SUDEP), because they were either diagnosed with epilepsy prior to death or SUDEP was assumed to be the most likely cause of death in the absence of a terminal seizure.
The primaryCM group (n = 18) included cases with a macroscopically and histologically abnormal heart, most of which were likely cardiomyopathies. All cases where a prior disease (e.g. hypertension, old infarction scars, past myocarditis, etc.) led to a heart condition are summarized in the secondary group (n = 10). Finally, the control group (n = 20) consisted of suddenly deceased individuals who presented with neither a heart-related condition nor SUD, but died of other non-violent causes (including drowning, illicit substance overdose and accidents). For the categorization into the three SUD groups, a second expert opinion was obtained. Following the recommendations in , the initial interpretation of the SUD cases by the physicians at the ZIFM was re-examined by a cardiopathologist, who re-evaluated the histological specimens in combination with the macroscopic photographs and case history. In cases where no histological slides were available (n = 12), the initial autopsy report and macroscopic photographs were reviewed by the cardiopathologist. A summary of the above-mentioned characteristics for the entire study cohort can be found in Supplementary Table . For statistical comparison of the metadata among the groups, a Student's t test was used for continuous variables and Fisher's exact test for categorical variables.
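As an illustration of these metadata comparisons, the sketch below builds a toy cohort table and applies the two tests; the group sizes follow the paper, but all values are simulated and the column names are placeholders.

```r
set.seed(4)

# Toy cohort metadata: group sizes as in the study, values simulated
meta <- data.frame(
  group = factor(rep(c("primaryN", "primaryCM", "secondary", "control"),
                     times = c(26, 18, 10, 20))),
  age   = round(runif(74, 1, 62)),
  sex   = sample(c("m", "f"), 74, replace = TRUE)
)
meta$is_case <- meta$group != "control"

# Student's t test for a continuous variable (cases vs. controls)
t.test(age ~ is_case, data = meta)

# Fisher's exact test for a categorical variable
fisher.test(table(meta$sex, meta$is_case))
```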
Prediction of biological age

For the prediction of biological age, Horvath's clock was applied to each sample using the R package methylclock . Normal distribution of the predictions was checked with the Shapiro–Wilk normality test, and the prediction accuracies of the case and control groups were compared with the Kruskal–Wallis rank sum test.

Exome analysis

For most of the case samples in this study, exome data were available from previous studies . A summary of these results can be found in Supplementary Table .
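As a pointer to how the age-prediction step above can be implemented, the following is an illustrative R sketch; `betas` (a probe-by-sample beta-value matrix with Illumina CpG IDs as row names) and `meta` (chronological age and group labels) are hypothetical objects, and the expected input format of DNAmAge() should be checked against the methylclock documentation.

```r
library(methylclock)

clocks <- DNAmAge(betas)              # returns several clocks, incl. Horvath
error  <- clocks$Horvath - meta$age   # per-sample prediction error

shapiro.test(error)                   # normality of the prediction errors
kruskal.test(error ~ meta$group)      # compare prediction accuracy by group
```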
Results

Study cohort

Of the 74 deceased persons included in this study, 20 were control samples (6 female, 14 male; mean age 29.4 years for females and 25.5 years for males), 26 were classified as primaryN (6 female, 20 male; mean age 29.3 years for females and 30.7 years for males), 18 as primaryCM (8 female, 10 male; mean age 32.2 years for females and 32.8 years for males) and 10 as secondary (3 female, 7 male; mean age 34.9 years for females and 40.6 years for males). Across all groups, 7 individuals were above the age of 50 (primaryN: n = 1, primaryCM: n = 3, secondary: n = 2, control: n = 1). A summary of all comparisons can be found in Supplementary Table . Statistical comparisons between each case group and the control group were performed on all available metadata to investigate possible confounding variables arising from the study design. The comparison between the primaryN and the control group showed one statistically significant difference, namely in illicit substance consumption, as more primaryN individuals had not consumed illicit substances (primaryN: n = 22/26, control: n = 7/20, p value = 0.023). The comparison between the primaryCM and the control group also yielded a statistically significant difference in the consumption of illicit substances. More controls than primaryCM individuals had consumed amphetamines, cannabis, benzodiazepines, cocaine and opiates (primaryCM: n = 0/18, control: n = 6/20, p value = 0.010), all of which are likely QT-interval prolonging according to www.crediblemeds.org , www.brugadadrugs.com and personal communication. Additionally, heart weight was statistically significantly higher in the primaryCM group than in the control group (primaryCM: mean (SD) = 420 g (154.08 g), control: mean (SD) = 306.5 g (108.25 g), p value = 0.014). When comparing the secondary and the control group, a statistically significantly older mean age was observed for the secondary group (secondary: mean (SD) = 38.92 years (11.51 years), control: mean (SD) = 26.65 years (11.98 years), p value = 0.014), as well as a statistically significantly shorter post-mortem interval (secondary: mean (SD) = 22 h (9.36 h), control: mean (SD) = 36.9 h (27.52 h), p value = 0.038) and a statistically significantly longer storage time of the tissue (secondary: mean (SD) = 8.2 years (1.81 years), control: mean (SD) = 4.3 years (2.15 years), p value = 0.00004).
Finally, no one in the secondary group died during normal daily activities (e.g. working in an office) compared to five people in the control group (p value = 0.026), and heart weight was larger in the secondary group than in the control group (secondary: mean (SD) = 434 g (135.42 g), control: mean (SD) = 306.5 g (108.25 g), p value = 0.020).

Age prediction

A common assessment in methylome studies is the prediction of biological age based on DNA methylation changes at age-associated CpG sites . Therefore, a prediction of biological age based on Horvath's clock was performed on all samples (see Supplementary Fig. ). No statistically significant differences in prediction accuracy were observed among the three case groups and the control group (p value = 0.13). Additionally, all three case groups were considered together and compared to the control group, which also yielded a non-significant result for prediction accuracy (p value = 0.063).

Principal component analysis

An initial investigation of potentially confounding factors revealed twelve statistically significant correlations (p value < 0.05) with PCs: sex, age, post-mortem interval, body weight, BMI, batch, sample position on the array chip, sample storage time, reanimation, medical history, event at death and alcohol consumption (see Supplementary Fig. ). Based on these results, a correction for the confounding variables was carried out in the subsequent analysis of DMRs. Case/control annotation was statistically significantly correlated (p value < 0.05) with PC4, PC6, PC7 and PC9 within the first ten PCs. PCA plots were generated for all combinations of these PCs, with PC6 versus PC9 yielding the clearest clustering of the case and control groups (see Supplementary Fig. ).

PrimaryN case group

Differentially methylated regions

DMRs between the primaryN and the control groups were identified by filtering for statistically significant regions (p value < 0.05) with an absolute difference in the region's beta values of at least 0.1 (|Δβ| ≥ 0.1). All investigated regions are also visualized in a volcano plot (see Supplementary Fig. ). For the primaryN group, 605 DMRs were found. Of those, 366 were statistically significantly hypomethylated (p value < 0.05), while 239 were statistically significantly hypermethylated (p value < 0.05). All DMRs were associated with a total of 524 genes (see Supplementary Table ).

Gene ontology

GO analysis of biological pathways revealed primarily cell- and organ-development-related pathways (Fig. ). Notably, the pathway with the lowest false discovery rate (FDR) was heart outflow tract morphogenesis. Other pathways with low FDRs included organ growth, cell fate commitment, anterior/posterior pattern specification, collagen fibril organization and artery development. The GO analysis of molecular functions and cellular components did not identify any common functions or components for the statistically significant genes of the primaryN cases.

Genes

The genes associated with the 20 DMRs with the lowest p values were investigated in more detail. All 20 genes are listed in Table , and a heatmap with all CpG sites corresponding to these 20 DMRs can be found in Supplementary Fig. . A GO term analysis investigating common biological pathways, molecular functions or cellular components of these genes did not yield any results.
Some of the 20 genes are involved in regulating gene expression (long non-coding RNAs, a microRNA and a homeobox gene) or display enzymatic activities (a NADH dehydrogenase component, a RING finger domain component and a methyltransferase).

PrimaryCM case group

Differentially methylated regions

A total of 63 DMRs were identified for the primaryCM group in comparison to the controls. All inspected regions were also visualized in a volcano plot (see Supplementary Fig. ). Of these 63 identified DMRs, 32 were hypomethylated and 31 were hypermethylated. These DMRs were associated with 58 genes (see Supplementary Table ).

Gene ontology

The GO analysis did not reveal any biological pathways or cellular components for the DMR-associated genes. However, four molecular functions common to several of those genes were found, namely p53 binding, lysine-acetylated histone binding, acetylation-dependent protein binding and transcription coactivator activity (see Fig. ).

Genes

The 20 DMRs with the lowest p values were associated with 19 genes (Table ); one DMR was not associated with any gene. GO analysis revealed an enrichment for genes associated with the epigenetic regulation of gene expression. A heatmap of all CpG sites belonging to the gene-associated DMRs can be found in Supplementary Fig. .

Secondary case group

We did not find any statistically significant DMRs in the comparison of the secondary group and the control cohort.

Exome analysis

The previously published exome results were compared to the methylation results by investigating the beta values in samples with a pathogenic or likely pathogenic variant in contrast to all other samples. The beta values at CpG sites of all genes with (likely) pathogenic variants were visually inspected in boxplots, with the exception of LZTR1 and CALR3, for which no CpG sites are covered by the Infinium™ MethylationEPIC v2.0 BeadChip kit (see Supplementary Fig. ). Samples with (likely) pathogenic variants were not found to have any significantly different beta values across the genes affected by the variants.
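As a sketch of the visual inspection just described, the beta values at the CpG sites of a variant-carrying gene can be compared between variant carriers and all other samples; `betas`, `gene_cpgs` and `carrier` are hypothetical stand-ins for the study's objects.

```r
# Mean beta value per sample across the CpG sites of one affected gene
vals <- colMeans(betas[gene_cpgs, , drop = FALSE], na.rm = TRUE)

# carrier: logical vector, TRUE for samples with a (likely) pathogenic variant
boxplot(vals ~ factor(carrier, labels = c("Other samples", "Variant carrier")),
        ylab = "Mean beta value", xlab = "")
```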
Discussion

In this study, we compared the human methylome in post-mortem ventricular tissue of SUD cases to that of a suddenly deceased non-SUD control cohort. We divided the SUD cases into three groups: (1) primaryN, whose hearts appeared morphologically and histopathologically normal and whose deaths were suspected to be primarily arrhythmic; (2) primaryCM, whose hearts were morphologically enlarged and histopathologically conspicuous, compatible with a primary cardiomyopathy; and (3) secondary, whose heart failure was suspected to result from a prior condition (e.g. hypertension, past myocarditis). With this subdivision into three groups, we strove to obtain meaningful results that are minimally affected by the heterogeneity of our study cohort while keeping the groups large enough for our analyses. As part of our grouping criteria, we included heart weight as an indicator of a normally weighted or enlarged heart. For all SUD cases, heart weight was measured after opening of the heart chambers and removal of blood and clots.
Although we are aware that this might lead to heart weights that deviate from measurements taken before chamber opening , all cases were measured consistently in the same way, ensuring comparability within our cohort. The distribution of the metadata among the three case groups and the control group was statistically significantly different for the consumption of illicit substances in the primaryN group, for heart weight and the consumption of illicit substances in the primaryCM group, and for age, post-mortem interval, storage time, heart weight and the event at death in the secondary group. The smaller number of persons consuming illicit substances in the primaryN and primaryCM groups compared to the control group is explained by the control cohort explicitly including sudden death cases due to illicit substance overdoses. Although this could potentially affect methylation patterns, we assume that it does not do so in heart tissue and should therefore be irrelevant for this study. An additional finding in the primaryCM group was the statistically significantly larger heart weight, which may be explained by the definition of this case group, as it included all cases with a morphologically enlarged heart. In the secondary group, both the greater heart weight and the older age can be explained by the definition of this group. As it includes cases whose cardiac dysfunction (in both normally weighted and enlarged hearts) is due to a previous morbidity or accumulating lifestyle factors, it is logical that such cases tend to be found in older individuals. The other statistically significant differences between this case group and the control group (shorter post-mortem interval, longer storage time and smaller number of deaths that occurred during normal daily activities such as working in an office) are assumed to be incidental findings. We adjusted for the statistically significantly different factors in our analysis. In contrast to many other SUD studies, this study also included a small number of cases over the age of 50 years (case groups: n = 6, control group: n = 1). Since these individuals did not exhibit unexpected or significantly different methylation patterns compared to the other cases within their respective groups, it is reasonable to assume that excluding them would not have significantly impacted the results. Given that these individuals experienced an unexpected death similar to all individuals under 50, their inclusion in the analysis was deemed appropriate for this study. The investigation of the methylome showed that the primaryN group yielded the most promising results of the three case groups. Of the 524 genes associated with DMRs, some were directly involved in heart-related biological pathways, such as outflow tract morphogenesis, which is crucial for the blood supply to the arteries during heart development and in the adult heart , artery development and trabecular morphogenesis. Although none of these pathways is directly related to arrhythmic cardiac conditions, which are suspected to be the cause of death in this case group, we hypothesize that even general abnormalities in heart morphogenesis could have contributed to SUD through an arrhythmic condition. None of the other observed pathways was directly related to heart function.
This may indicate that some differential methylation can be observed in heart-related pathways, although the majority of DMRs are found in genes associated with more general functions such as organ growth, cell fate commitment, mesoderm development, mesenchyme development and connective tissue development. These findings suggest that differential methylation does not affect just a single heart-associated pathway but most likely affects a combination of different biological pathways whose complex interactions may have contributed to a fatal heart condition. In the primaryCM group, no common biological pathways were identified. This could be explained by the relatively small number of genes associated with DMRs (n = 58). In addition, this case group includes a variety of suspected cardiomyopathies, such as HCM and DCM, which might make it difficult to identify common pathways. Notably, four common molecular functions were identified (p53 binding, lysine-acetylated histone binding, acetylation-dependent protein binding and transcription coactivator activity), all of which are related to gene regulation. We speculate that changes in such regulatory functions may have contributed to the malfunction of the heart by adding to its abnormal growth. For instance, p53 has been shown to contribute to the development of cardiovascular diseases via mitochondrial dysfunction and was reported to be elevated in DCM and diabetic cardiomyopathy . Furthermore, inhibition of histone acetylation was reported to prevent the development of ventricular hypertrophy in rats, which could indicate that differential expression in such pathways might contribute to an altered heart morphology as well . Among the genes associated with the DMRs with the lowest p values in the primaryN group, it was striking that one gene, FGF12 , had already been shown to be associated with cardiac arrhythmia and early onset of epilepsy in mice . FGF12 is involved in multiple biological processes, including embryonic development, cell growth, morphogenesis, tissue repair, tumour growth and invasion . Genetic variants of FGF12 have also been implicated in developmental and epileptic encephalopathy as well as autism spectrum disorder. Although the previously reported association with SUD was due to pathogenic variants of FGF12 , a deregulation of this gene caused by changes in its methylation pattern may contribute to its pathogenic effect. Another gene with reported heart-specific functions found in the primaryN group was TMEM88 . TMEM88 encodes transmembrane protein 88, a suppressor of the Wnt/β-catenin signalling pathway, which regulates cardiovascular progenitor cell specification . Knockdown of TMEM88 has been shown to result in a cell fate shift towards endothelial rather than cardiomyocyte development. Therefore, its deregulation through differential methylation may also have a detrimental effect on the heart. Of the other genes associated with the top 20 DMRs, several are involved in tumourigenesis , while others contribute to diseases such as Alzheimer's disease , retinitis pigmentosa , hearing loss , leucocyte adhesion deficiency or autoimmune diseases . As there was no clear common denominator for most of these genes, further studies will be required to elucidate their role in SUD. Additionally, it might be beneficial to investigate tissues other than heart, especially in suspected SUDEP cases, where an investigation of brain tissue may provide further insights.
The exclusion of all suspected SUDEP cases might have led to subtle differences in the overall results, though predicting the extent of this impact is challenging. It is possible that the variation observed in the methylation patterns would have been slightly reduced, even though this study focused on heart rather than brain tissue. Nonetheless, the genes associated with the lowest p values for this group did not include genes related to epileptic conditions, suggesting that SUDEP cases were unlikely to be a key factor in the observed outcomes and that their exclusion would likely not have significantly altered the results for this group. Among the genes associated with the DMRs with the lowest p values in the primaryCM group, MYBPH was previously reported to be associated with cardiac disorders, including HCM ; however, further studies are needed to confirm these findings. Some of the other significant genes include one gene involved in cancer development and genes with regulatory functions such as chromatin structure regulation , cell proliferation and long non-coding RNAs. It is striking that some of the 20 DMRs with the lowest p values in both the primaryN and primaryCM groups are driven by only one case that is significantly differentially methylated compared to the other cases of that group and the entire control cohort. While these DMRs are not representative of the whole case group, finding differential methylation in one case could still be of great importance for that particular case. An example of this is SUD062 in the primaryCM group, in which we identified KCNQ1 as differentially methylated. Pathogenic variants in KCNQ1 cause LQTS, and KCNQ1 is a recommended target for genetic testing in SUD cases . Although the primaryCM group does not include LQTS, recent studies have suggested that cardiomyopathies and channelopathies might not be as genetically distinct as previously assumed . In the same case, WNT6 was also identified as differentially methylated. WNT6 is a member of the WNT gene family. It mediates various cellular functions, including some in heart tissue. Upon heart injury, WNT signalling is inhibited to prevent further cardiac damage . This causes an increased development of cardiac progenitor cells and an overall inhibited proliferation, both of which contribute to faster healing of the heart. In the context of SUD, differential expression of this gene might cause an impaired response to cardiac stress, adding to the ultimate heart failure. Our overall results for the secondary group suggest no differential methylation between this case group and the control group. This is most likely due to the heterogeneity of the case group, which includes SUD cases with a variety of prior conditions such as past myocarditis, chemotherapy, renal insufficiency, hypertension, previous mental disease and old infarction scars on the heart. Furthermore, this study only investigated DNA methylation patterns in heart tissue, although the underlying primary condition in this case group is not always found in the heart. An investigation of a different tissue, such as brain or kidney, in the secondary group might reveal more significant results regarding DNA methylation patterns. As differential methylation could be strongly influenced by imprinting, we compared the genes associated with DMRs from all case groups to a list of 150 previously reported imprinted genes .
Six of these were present in our results ( KCNQ1 , KCNQ1DN , WT1 , BLCAP , NNAT and DLGAP2 ); however, their imprinting has been reported in tissues other than heart . We therefore suggest that imprinting did not affect the results of our study. In methylome studies, it is common practice to predict biological age based on DNA methylation changes at age-associated CpG sites . An overall under- or overestimation of age in the case groups of this study could be a general indicator of a more or less exhausted methylome. We predicted biological age with Horvath's original pan-tissue epigenetic clock based on 353 age-dependent CpG sites . We found no statistically significant differences among the case and control groups. Given the relatively limited sample size, the power to detect small differences was low; therefore, it cannot be excluded that a biological age difference nonetheless exists between the case and control groups. A larger sample size would be needed to confirm this. Taken together, our findings suggest a moderate influence of differential methylation patterns on some biological pathways and genes associated with the human heart. This aligns with other studies that have found methylation differences in heart diseases such as various forms of cardiomyopathy, including DCM and HCM . In addition, some studies have reported an overall moderate increase in global methylation levels in some cardiac diseases such as coronary heart disease, acute coronary syndrome, cardiomyopathies and heart valve disease, whereas other diseases such as atherosclerosis showed global hypomethylation . This agrees well with our findings, as the primaryN and primaryCM groups both showed moderate global hypermethylation, while the secondary group showed moderate hypomethylation compared to the control samples. Pathogenic or likely pathogenic variants were not found to affect the methylation levels in the respective samples, suggesting that the methylation differences observed in this study were not influenced by genetic alterations. However, a more detailed analysis of the exome–methylome interaction in these cases might reveal further insights in future studies. In this study, we aimed to show for the first time that genome-wide DNA methylation analysis of SUD cases can be a potential addition to existing genetic investigations. We found DMRs between cases with and without morphological abnormalities of the heart and a control group. We even identified some individual cases whose methylation patterns at some DMRs were significantly different from those of the control cohort and the remaining cases of the respective case group. Therefore, our study shows that DNA methylation may be an additional contributor to the development of SUD and a potential future target for therapeutic interventions in relatives. In addition, the investigation of other omics approaches could also greatly contribute to a better understanding of SUD in general.
Long-term results of regenerative treatment of intrabony defects: a cohort study with 5-year follow-up | 5e5241ec-3919-43eb-958a-8e9db1a9b0df | 10771644 | Dental[mh] | Periodontal disease is a chronic infectious disease of the oral cavity with a prevalence of around 50%, which typically leads to destruction of the periodontal tissues and tooth loss [ – ]. Risk factors affecting the onset and deterioration of periodontal disease include modifiable factors, such as smoking, diabetes mellitus, oral pathologic microorganisms, and psychological stress, and non-modifiable factors, such as age, genetic factors, and host immune response . In addition, periodontal disease shares risk factors unidirectionally and bidirectionally with major chronic systemic diseases, including cardiovascular disease, diabetes mellitus, hypertension, rheumatoid arthritis, and osteoporosis [ – ]. Therefore, periodontitis-related complications are a growing public health concern associated with a high morbidity burden worldwide . Non-surgical and surgical periodontal procedures are widely used highly predictive treatment techniques, and their primary therapeutic goal is to maintain natural teeth and related soft and hard tissues functionally and healthily for a long time period . In particular, clinical studies on various periodontal regenerative procedures, including guided tissue regeneration (GTR), bone grafts, and enamel matrix protein derivatives (EMDs), have shown a tooth survival rate of over 90% and reported that periodontal conditions are successfully treated and stably maintained for over 10 years . A recent long-term cohort study reported a tooth loss rate of 2.6% and improved mean defect fill that was sustained for 10 years after periodontal regenerative surgery of intra-bony defects . Another long-term study confirmed that a tooth survival rate of 90% was achieved over a period of 13 years of functional loading and that clinical improvements were maintained at a rate of 82% for 11 years . Despite the development of efficient treatment modalities and innovative materials, periodontal tissue regeneration remains challenging. Although many clinical and epidemiological studies have confirmed that periodontal regenerative treatment shows better clinical and radiographic improvements compared to open flap debridement (OFD), long-term evidence of the benefits of periodontal regenerative treatment remain to be accumulated. Furthermore, there is limited long-term evidence to support the additional benefits of using deproteinized porcine bone mineral (DPBM) for periodontal defect regeneration. Therefore, the purpose of this cohort study was to evaluate the long-term clinical and radiographic outcomes and survival of teeth in periodontal regenerative treatment of intrabony defects using combined EMD and DPBM compared to EMD alone.
Ethics The study protocol was approved by the research ethics board at Daejeon Dental Hospital, Wonkwang University (approval No. W2208/003 − 001), and written informed consent was obtained from all patients before beginning of the study. The study was performed in accordance with the revised principles of the Helsinki Declaration and STROBE guidelines for the conduct and reporting of observational studies . All methods in this study were performed in accordance to relative guidelines and regulations. Patients In this retrospective cohort study, patients who underwent periodontal regenerative surgery with EMD with or without adjunctive use of DPBM between September 2016 and December 2020 at the Department of Periodontology, Daejeon Dental Hospital, Wonkwang University were screened and reviewed. The EMD alone group and the combined EMD and DPBM group were determined based on the additional cost of using bone graft substitutes and the patient’s personal choice. Inclusion criteria were: (1) age ≥ 19 years; (2) presence of intrabony defects treated with regenerative surgery; (3) 0 or 1 degree of tooth mobility before regenerative surgery; 4)stable periodontal status (full-mouth bleeding-on-probing and plaque scores < 25%); 5) systemically healthy or controlled medical condition; and 6) follow-up after periodontal surgery ≥ 2 years. Exclusion criteria were: (1) heavy smoking (≥ 20 cigarettes/day); (2) uncontrolled systemic diseases or periodontal conditions; (3) intrabony defects extending into the furcation region (grade II or III); and (4) no or irregular supportive periodontal treatment (SPT). The present cohort included 176 patients with 333 intrabony defects (mean 1.9 defects/patient), comprising 115 (65.3%) males and 61 (34.7%) females, with a mean age of 54.7 ± 8.9 (range, 25–80) years at T0. We found seven (4.0%) cases of diabetes mellitus, 30 (17.0%) cases of hypertension, 156 (88.6%) non-smokers, and 20 (11.4%) smokers with < 20 cigarettes/day. The mean follow-up duration was 58.6 ± 11.2 (range, 25–78) months. The distribution of defect morphology according to the number of walls showed a statistically significant difference between the compared two groups ( p = 0.003). Table shows the detailed baseline information. The intrabony defects were distributed as follows: maxillary anterior region, n = 30 (9.0%); maxillary premolar region, n = 56 (16.8%); maxillary molar region, n = 19 (5.7%); mandibular anterior region, n = 21 (6.3%); mandibular premolar region, n = 108 (32.4%); and mandibular molar region, n = 99 (29.7%) (Fig. .). Surgical regenerative procedure A board-certified periodontal specialist (J.H.L.) performed all surgeries. Under local anesthesia (2% lidocaine, 1:100,000 epinephrine), a full-thickness mucoperiosteal flap was minimally elevated to access the intrabony defect using simplified or modified papilla preservation techniques . Granulation tissues were removed, and the exposed tooth surfaces were scaled and planed with an ultrasonic scaler (SONICflex air scaler, KaVo, Biberach, Germany) and manual curettes (standard and mini Gracey curettes, Hu-Friedy, Chicago, USA). The debrided root surfaces were then conditioned with tetracycline hydrochloride at a concentration of 50 mg/mL for 2 min and rinsed with a sterile saline solution. 
Subsequently, an adequate amount of EMD (Straumann Emdogain® 0.3 mL, Straumann, Basel, Switzerland) was applied to the hemostatic, dried tooth surface and defect site, with or without the adjunctive use of DPBM (deproteinized porcine bone mineral, THE Graft® 0.25 g, Purgo Biologics, Seongnam, Korea). In the combined EMD and DPBM group, the remaining EMD and DPBM were mixed, the defect site was evenly filled with a condenser, and any excess EMD and DPBM was removed. Tension-free flap closure was performed with interrupted (absorbable 6–0 Vicryl®, Johnson & Johnson, New Jersey, and non-absorbable 3–0 Biotex®, Purgo, Seongnam, Korea) and horizontal mattress (non-absorbable 4–0 Dafilon®, Braun Surgical, Tuttlingen, Germany, and non-absorbable 3–0 Biotex®) sutures.

Post-surgical procedure

All treated patients received post-operative antibiotics (Amoxicillin®, Chongkundang Pharm, Seoul, Korea; amoxicillin 500 mg thrice daily) and analgesics (Brufen®, Samil Co, Seoul, Korea; ibuprofen 200 mg thrice daily) for 3–7 days and were instructed to rinse their mouths twice daily with 15 mL of 0.12% chlorhexidine digluconate (Hexamedine®, Bukwang Pharm, Seoul, Korea) for 1 min for 2 weeks. Two weeks after the periodontal regenerative surgery, the sutures were removed and the surgical site was cleansed with a sterile saline solution. For SPT, professional tooth cleaning, with provision of plaque control instructions, was performed every 3–6 months depending on the periodontal inflammatory status of each patient.

Clinical and radiographic parameters

Clinical and radiographic parameters were measured at baseline (T0, preoperatively), at the 6-month follow-up (T1) and at the last follow-up (T2) after regenerative surgery for intrabony defects. Clinical parameters included the probing pocket depth (PPD), measured as the vertical distance between the gingival margin and the bottom of the periodontal pocket, and the clinical attachment level (CAL), measured as the vertical distance between the cementoenamel junction and the bottom of the periodontal pocket. Radiographic parameters included the defect depth (DD), measured as the vertical distance from the alveolar crest to the bottom of the bone defect, and the defect width (DW), measured as the horizontal distance from the alveolar crest to the root surface. A single calibrated examiner who was not involved in the surgeries recorded the clinical and radiographic parameters using a periodontal probe (CP 15 UNC, Hu-Friedy, Chicago, IL, USA) and medical imaging software (Osirix X version 12.5.3, Pixmeo SARL, Geneva, Switzerland) (Fig. .).

Statistical analysis

Descriptive statistics are expressed as frequencies, proportions, means and standard deviations. An intra-examiner agreement test was performed to determine the reliability of the radiographic assessments: 10 cases were measured twice, and the intra-examiner correlation showed over 90% reproducibility for the single examiner, who was not involved in the surgical procedures. The Shapiro–Wilk test was performed to assess the normality of the data distribution, and Levene's test was used to assess the homogeneity of variances. Independent t-tests and paired t-tests were performed to identify significant differences in the clinical and radiographic parameters between and within the groups at T0, T1 and T2, respectively.
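As an illustration of these comparisons, a minimal R sketch is given below; `ppd_t0` and `ppd_t2` (per-defect probing pocket depths at T0 and T2) and the two-level factor `grp` (EMD alone vs. combined EMD and DPBM) are hypothetical objects.

```r
# Within-group change from T0 to T2 (paired t-test), here for one group
in_grp <- grp == "EMD+DPBM"
t.test(ppd_t0[in_grp], ppd_t2[in_grp], paired = TRUE)

# Between-group comparison of the PPD reduction (independent t-test)
reduction <- ppd_t0 - ppd_t2
t.test(reduction ~ grp)
```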
Kaplan–Meier estimates were used to analyze the time to tooth loss over the observation period, and log-rank tests were conducted to compare the survival curves of teeth treated with and without the adjunctive use of DPBM. A multivariate Cox proportional-hazards regression analysis adjusted for age, sex, smoking status, hypertension, diabetes mellitus, tooth position, defect morphology and presence/absence of DPBM was used to estimate hazard ratios (HRs) for tooth loss after periodontal regenerative surgery. All statistical analyses were conducted using statistical software (SPSS Statistics version 28.0, IBM Corp., Armonk, New York, and MedCalc version 20.114, Mariakerke, Belgium), and a p value < 0.05 was considered to indicate statistical significance.
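The survival analysis described above can be sketched with the standard R survival package; the data frame `teeth` and its columns (`months`, `lost`, `dpbm`, `walls`, `position`, etc.) are hypothetical stand-ins for the study data.

```r
library(survival)

# Kaplan–Meier curves by graft group and the corresponding log-rank test
km <- survfit(Surv(months, lost) ~ dpbm, data = teeth)
survdiff(Surv(months, lost) ~ dpbm, data = teeth)

# Multivariate Cox model for tooth loss, adjusted as in the text
cox <- coxph(Surv(months, lost) ~ age + sex + smoking + hypertension +
               diabetes + position + walls + dpbm, data = teeth)
summary(cox)  # adjusted hazard ratios with 95% confidence intervals
```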
The study protocol was approved by the research ethics board at Daejeon Dental Hospital, Wonkwang University (approval No. W2208/003 − 001), and written informed consent was obtained from all patients before beginning of the study. The study was performed in accordance with the revised principles of the Helsinki Declaration and STROBE guidelines for the conduct and reporting of observational studies . All methods in this study were performed in accordance to relative guidelines and regulations.
In this retrospective cohort study, patients who underwent periodontal regenerative surgery with EMD with or without adjunctive use of DPBM between September 2016 and December 2020 at the Department of Periodontology, Daejeon Dental Hospital, Wonkwang University were screened and reviewed. The EMD alone group and the combined EMD and DPBM group were determined based on the additional cost of using bone graft substitutes and the patient’s personal choice. Inclusion criteria were: (1) age ≥ 19 years; (2) presence of intrabony defects treated with regenerative surgery; (3) 0 or 1 degree of tooth mobility before regenerative surgery; 4)stable periodontal status (full-mouth bleeding-on-probing and plaque scores < 25%); 5) systemically healthy or controlled medical condition; and 6) follow-up after periodontal surgery ≥ 2 years. Exclusion criteria were: (1) heavy smoking (≥ 20 cigarettes/day); (2) uncontrolled systemic diseases or periodontal conditions; (3) intrabony defects extending into the furcation region (grade II or III); and (4) no or irregular supportive periodontal treatment (SPT). The present cohort included 176 patients with 333 intrabony defects (mean 1.9 defects/patient), comprising 115 (65.3%) males and 61 (34.7%) females, with a mean age of 54.7 ± 8.9 (range, 25–80) years at T0. We found seven (4.0%) cases of diabetes mellitus, 30 (17.0%) cases of hypertension, 156 (88.6%) non-smokers, and 20 (11.4%) smokers with < 20 cigarettes/day. The mean follow-up duration was 58.6 ± 11.2 (range, 25–78) months. The distribution of defect morphology according to the number of walls showed a statistically significant difference between the compared two groups ( p = 0.003). Table shows the detailed baseline information. The intrabony defects were distributed as follows: maxillary anterior region, n = 30 (9.0%); maxillary premolar region, n = 56 (16.8%); maxillary molar region, n = 19 (5.7%); mandibular anterior region, n = 21 (6.3%); mandibular premolar region, n = 108 (32.4%); and mandibular molar region, n = 99 (29.7%) (Fig. .).
A board-certified periodontal specialist (J.H.L.) performed all surgeries. Under local anesthesia (2% lidocaine, 1:100,000 epinephrine), a full-thickness mucoperiosteal flap was minimally elevated to access the intrabony defect using simplified or modified papilla preservation techniques . Granulation tissues were removed, and the exposed tooth surfaces were scaled and planed with an ultrasonic scaler (SONICflex air scaler, KaVo, Biberach, Germany) and manual curettes (standard and mini Gracey curettes, Hu-Friedy, Chicago, USA). The debrided root surfaces were then conditioned with tetracycline hydrochloride at a concentration of 50 mg/mL for 2 min and rinsed with a sterile saline solution. Subsequently, adequate amount of EMD (Straumann Emdogain® 0.3 mL, Straumann, Basel, Switzerland) was applied to the hemostatic and dried tooth surface and defect site, with or without the adjunctive use of DPBM (deproteinized porcine bone mineral, THE Graft® 0.25 g, Purgo Biologics, Seongnam, Korea). In the combined EMD and DPBM group, the remaining EMD and DPBM were mixed and then the defect site was evenly filled with a condenser and any excess EMD and DPBM was removed. Tension-free flap closure was performed with interrupted (absorbable 6–0 Vicryl®, Johnson & Johnson, New Jersey, and non-absorbable 3–0 Biotex®, Purgo, Seongnam, Korea) and horizontal mattress (non-absorbable 4–0 Dafilon®, Braun Surgical, Tuttlingen, Germany, and non-absorbable 3–0 Biotex®) sutures.”
All treated patients received post-operative antibiotics (Amoxicillin®, Chongkundang Pharm, Seoul, Korea, amoxicillin 500 mg thrice daily) and analgesics (Brufen®, Samil Co, Seoul, Korea, ibuprofen 200 mg thrice daily) for 3–7 days and were instructed to rinse their mouths twice daily with 15 mL of 0.12% chlorhexidine digluconate (Hexamedine®, Bukwang Pharm, Seoul, Korea) for 1 min for 2 weeks. After 2 weeks since periodontal regeneration surgery, the sutures were removed, and the surgical site was cleansed with a sterile saline solution. For SPT, professional tooth cleaning, with provision of plaque control instructions, was performed every 3–6 months depending on the periodontal inflammatory status of each patient.
Clinical and radiographic parameters were measured at the baseline (T0, preoperatively), 6-month follow-up (T1), and last follow-up (T2) after regenerative surgery for intrabony defects. Clinical parameters included the probing pocket depth (PPD), measured as the vertical distance between the gingival margin and the bottom of the periodontal pocket, and clinical attachment level (CAL), measured as the vertical distance between the cementoenamel junction and the bottom of the periodontal pocket. Radiographic parameters included defect depth (DD), measured as the vertical distance from the alveolar crest to the bottom of the bone defect, and defect width (DW), measured as the horizontal distance from the alveolar crest to the root surface. A single calibrated examiner who was not involved in the surgery recorded clinical and radiographic parameters using a periodontal probe (CP 15 UNC, Hu-Friedy, Chicago, IL, USA) and medical imaging software (Osirix X version 12.5.3, Pixmeo SARL, Geneva, Switzerland) (Fig. .).
Descriptive statistics are expressed as frequencies, proportions, means, and standard deviations. An intra-examiner agreement test was performed to determine the reliability of the radiographic assessments: ten cases were measured twice, and the intra-examiner correlation showed over 90% reproducibility for the single examiner, who was not involved in the surgical procedures. The Shapiro–Wilk test was performed to assess the normality of the data distribution, and Levene’s test was used to assess the homogeneity of variances. Independent t-tests and paired t-tests were performed to identify significant differences in the clinical and radiographic parameters between and within the groups at T0, T1, and T2, respectively. Kaplan–Meier estimates were used to analyze the time to event for tooth loss over the observational period, and log-rank tests were conducted to compare the survival curves of teeth treated with and without the adjunctive use of DPBM. A multivariate Cox proportional-hazards regression analysis adjusted for age, sex, smoking status, hypertension, diabetes mellitus, tooth position, defect morphology, and presence/absence of DPBM was used to assess the hazard ratio (HR) for the risk of tooth loss after periodontal regenerative surgery. All statistical analyses were conducted using statistical software (SPSS Statistics version 28.0, IBM Corp., Armonk, New York, and MedCalc version 20.114, Mariakerke, Belgium), and a p-value < 0.05 was considered to indicate statistical significance.
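The univariate steps described above can be reproduced in any standard statistical environment. The following is a minimal Python sketch of the normality check, variance check, and between-/within-group t-tests, assuming SciPy in place of the SPSS procedures actually used; the values, group sizes, and distribution parameters are simulated stand-ins, not the study’s dataset.

```python
# Minimal sketch (assuming SciPy; simulated stand-in data) of the univariate tests.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical CAL-gain values (mm); the group sizes are illustrative only.
emd_dpbm  = rng.normal(2.8, 2.3, 120)   # combined EMD and DPBM group
emd_alone = rng.normal(2.2, 2.2, 213)   # EMD alone group

# Normality of each group's distribution (Shapiro-Wilk)
print(stats.shapiro(emd_dpbm))
print(stats.shapiro(emd_alone))

# Homogeneity of variances between groups (Levene's test)
print(stats.levene(emd_dpbm, emd_alone))

# Between-group difference: independent t-test
print(stats.ttest_ind(emd_dpbm, emd_alone, equal_var=True))

# Within-group change (T0 vs. T2): paired t-test on hypothetical paired values
t0 = rng.normal(8.5, 2.1, 120)
t2 = t0 - rng.normal(2.8, 2.3, 120)
print(stats.ttest_rel(t0, t2))
```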
Clinical and radiographic outcomes

In the combined EMD and DPBM group, the mean PPD and CAL changed significantly from 7.9 ± 1.9 mm at T0 to 5.2 ± 1.6 mm at T2 (mean difference [MD]: -2.8 ± 1.8 mm, p < 0.001) and from 8.5 ± 2.1 mm at T0 to 5.8 ± 2.1 mm at T2 (MD: -2.8 ± 2.3 mm, p < 0.001), respectively. In the EMD alone group, the mean PPD and CAL changed significantly from 7.6 ± 1.5 mm at T0 to 5.3 ± 1.5 mm at T2 (MD: -2.3 ± 1.8 mm, p < 0.001) and from 8.1 ± 1.9 mm at T0 to 5.9 ± 2.1 mm at T2 (MD: -2.2 ± 2.2 mm, p < 0.001), respectively. In the combined EMD and DPBM group, the mean DD and DW decreased significantly from 6.8 ± 2.6 mm at T0 to 4.3 ± 2.1 mm at T2 (MD: -2.5 ± 2.4 mm, p < 0.001) and from 1.7 ± 1.0 mm at T0 to 1.1 ± 0.9 mm at T2 (MD: -0.6 ± 1.0 mm, p < 0.001), respectively. In the EMD alone group, the mean DD and DW changed from 6.6 ± 2.4 mm at T0 to 4.6 ± 1.9 mm at T2 (MD: -2.0 ± 2.4 mm, p < 0.001) and from 1.7 ± 1.2 mm at T0 to 1.5 ± 1.2 mm at T2 (MD: -0.2 ± 1.3 mm, p = 0.093), respectively. Compared to periodontal surgery with EMD alone over a mean follow-up of 5 years, combined EMD and DPBM showed a significantly better gain in CAL (EMD and DPBM: 2.8 ± 2.3 mm vs. EMD alone: 2.2 ± 2.2 mm, p = 0.019) and reductions in PPD (EMD and DPBM: 2.8 ± 1.8 mm vs. EMD alone: 2.3 ± 1.8 mm, p = 0.028), DD (EMD and DPBM: 2.5 ± 2.4 mm vs. EMD alone: 2.0 ± 2.4 mm, p = 0.040) and DW (EMD and DPBM: 0.6 ± 1.0 mm vs. EMD alone: 0.2 ± 1.3 mm, p = 0.007). Table and Fig. provide detailed clinical and radiographic outcomes at T0, T1, and T2.

Tooth survival outcomes

A total of 16 teeth from nine (56.3%) male and seven (43.8%) female patients, with a mean age of 56.3 ± 7.8 (range, 42–68) years, were lost due to severe mobility, recurrence of pain, and signs of infection during the follow-up period. Most ( n = 10, 62.5%) of the lost teeth had one-wall intrabony defects, followed by two-wall ( n = 4, 25.0%) and three-wall ( n = 2, 12.5%) intrabony defects. The mean follow-up time until tooth loss was 58.2 ± 10.0 (range, 37–74) months (Table ). The overall survival rate of teeth did not differ between the two compared groups. At the end of the study period, the survival rates of the teeth were 91.48% and 95.20% in the patient- and tooth-based analyses, respectively. Figure shows the Kaplan–Meier estimates of tooth survival. The multivariate Cox proportional-hazards regression analysis for tooth loss, after adjusting for age, sex, smoking status, hypertension, diabetes mellitus, tooth position, defect morphology, and DPBM use, showed that tooth loss after periodontal regenerative treatment had a significant positive association with diabetes mellitus (reference: no diabetes mellitus, HR = 44.57, p = 0.003), the maxillary molar region (reference: maxillary anterior region, HR = 13.08, p = 0.022), and one-wall intrabony defects (reference: three-wall intrabony defect, HR = 18.73, p = 0.002; Table ).
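As a consistency check, the reported tooth-based survival agrees with the crude proportion (333 − 16)/333 ≈ 95.2%, although the reported figures are Kaplan–Meier estimates rather than crude proportions. A hedged sketch of this type of survival analysis, assuming the Python lifelines package (the authors used SPSS and MedCalc) and simulated stand-in data with hypothetical variable names, might look as follows.

```python
# Hedged sketch (assuming lifelines; simulated stand-in data) of the tooth-survival
# analysis: Kaplan-Meier estimate, log-rank test, and multivariate Cox model.
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)
n = 333  # number of treated intrabony defects in the cohort

df = pd.DataFrame({
    "months":   rng.uniform(25, 78, n),                   # follow-up time
    "lost":     rng.choice([0, 1], n, p=[0.952, 0.048]),  # tooth-loss event
    "dpbm":     rng.choice([0, 1], n),                    # adjunctive DPBM used
    "diabetes": rng.choice([0, 1], n, p=[0.96, 0.04]),    # example covariate
})

# Kaplan-Meier estimate of tooth survival over the observation period
kmf = KaplanMeierFitter().fit(df["months"], event_observed=df["lost"])

# Log-rank comparison of the DPBM vs. no-DPBM survival curves
a, b = df[df.dpbm == 1], df[df.dpbm == 0]
print(logrank_test(a["months"], b["months"],
                   event_observed_A=a["lost"],
                   event_observed_B=b["lost"]).p_value)

# Multivariate Cox proportional-hazards model; hazard ratios are exp(coef)
cph = CoxPHFitter().fit(df, duration_col="months", event_col="lost")
cph.print_summary()
```

In such a model, a hazard ratio above 1 for the diabetes covariate, as reported here (HR = 44.57), indicates a higher instantaneous risk of tooth loss relative to the reference category, holding the other covariates fixed.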
The objective of this cohort study was to evaluate the long-term clinical and radiographic outcomes of periodontal regenerative treatment for intrabony defects using EMD with and without DPBM. Combined EMD and DPBM showed significantly better clinical and radiographic outcomes, consistent with previous studies demonstrating that combined EMD and bone grafting improves the regeneration of periodontal intrabony and furcation defects . However, although most clinical studies have reported that regenerative therapy is a highly promising treatment strategy for periodontal defects compared to OFD, no clear consensus has been reached on the superiority or inferiority of the different regenerative treatment modalities relative to one another . Periodontal regenerative surgery with EMD has additional benefits compared to OFD in treating intraosseous defects [ – ]. In a recent cohort study, periodontal regenerative surgery with EMD significantly changed the mean PPD and CAL from 6.71 ± 1.22 to 3.75 ± 1.41 mm ( p < 0.001) and from 8.43 ± 1.86 to 5.81 ± 1.83 mm ( p < 0.001), respectively, and achieved a tooth survival rate of 90.7% over a mean observation period of 10.3 years . In another longitudinal meta-analysis, the relative clinical value of periodontal regeneration therapies, including EMD and GTR, compared to OFD was sustained for up to 5–10 years . Moreover, during the long follow-up period, clinical parameters, including PPD and CAL, did not differ statistically between the EMD and GTR groups . In a previous systematic review and meta-analysis, combined EMD and bone grafts provided additional clinical benefits in terms of PD reduction (EMD and bone grafts: 4.22 ± 1.20 mm vs. EMD alone: 4.12 ± 1.07 mm) and CAL gain (EMD and bone grafts: 3.76 ± 1.07 mm vs. EMD alone: 3.32 ± 1.04 mm) compared to EMD alone. However, in another recent meta-analytic review, combined EMD and bone grafts showed no statistically significant improvement in terms of PD reduction (standard difference in means [SDM]: -0.43 mm, p = 0.06) or CAL gain (SDM: -0.34 mm, p = 0.12) compared to EMD alone . Various patient- and tooth-related factors can significantly influence tooth loss following active periodontal treatment . The oral hygiene status during SPT (risk ratio [RR] = 1.58, p < 0.001), irregular SPT (RR = 3.17, p < 0.001), initial diagnosis of periodontitis (RR = 2.33, p < 0.001), age (RR = 1.05, p < 0.001), smoking (RR = 1.80, p < 0.05), and sex (RR = 1.45, p < 0.05) were patient-related risk factors, and baseline bone loss (odds ratio [OR] = 1.05, p < 0.001), furcation involvement (OR = 1.80, p < 0.05), and abutment tooth status (OR = 1.80, p < 0.05) were tooth-related risk factors significantly contributing to tooth loss (tooth-based survival rate: 93.26%) in Poisson and logistic multilevel regression analyses over 10 years . Within the limits of the available evidence, and compared to periodontal surgery with EMD alone over a mean follow-up of 5 years, combined EMD and DPBM showed statistically significantly better gains in CAL and reductions in PPD, DD and DW. However, although both PPD and CAL improved overall, the mean PPD and CAL values at follow-up still exceeded 5 mm. These results may reflect the fact that the analysis included not only three-wall defects, but also non-contained one-wall and/or furcation defects and maxillary molar regions with a poor prognosis.
In the present study, the 5-year overall survival rates of the teeth were 91.48% and 95.20% in the patient- and tooth-based analyses, respectively, with no statistically significant difference between the two compared groups in terms of the patient- or tooth-based survival rate. We also found that tooth loss showed a statistically significant association with diabetes mellitus (HR = 44.57, p = 0.003), the maxillary molar position (HR = 13.08, p = 0.022), and one-wall intrabony defects (HR = 18.73, p = 0.002), after adjusting for patient- and tooth-related confounding variables. In addition to local risk factors directly related to tooth loss, such as tooth position, root divergence, and serving as an abutment for fixed or removable partial dental prostheses, diabetes mellitus is a major risk factor for periodontal disease and tooth loss, and biologically plausible underlying mechanisms have been proposed [ , , – ]. Moreover, recent systematic reviews have reported a consistently high risk for complications of diabetes mellitus, such as diabetic retinopathy (OR = 2.8–8.7), neuropathy (OR = 3.2–6.6), nephropathy (OR = 1.9–8.5), cardiovascular complications (OR = 1.28–17.7), and mortality (OR = 2.3–8.5), in the presence of periodontal disease . Therefore, in determining the outcomes of periodontal regenerative treatment, local tooth-related factors and the presence and control of diabetes mellitus must be considered. In addition to intrabony defect morphology, the degree of furcation involvement is a major risk factor influencing long-term outcomes and tooth mortality . According to a meta-analysis, the relative risk of tooth loss due to the presence of furcation involvement is 2.21 (95% confidence interval = 1.79–2.74, p < 0.001) at up to 15 years of follow-up . In a recent long-term retrospective cohort study, 37% of teeth with class III furcation involvement were lost over an average of 9 years after active periodontal treatment . Therefore, the treatment sites included in this study were selected with the aim of minimizing the effect of furcation involvement on tooth loss, by limiting inclusion to intrabony defects with at most grade I furcation involvement. Although significant clinical and radiographic improvements were observed with combined EMD and DPBM compared to EMD alone, the results of the present study should be interpreted with caution. First, although efforts were undertaken to standardize treatment approaches, the current study has the inherent limitations of a retrospective observational design. Second, owing to heterogeneity within and between groups, selection and information biases should be considered when interpreting the findings. Third, as no calibrated or standardized methods were used for the radiographic measurements, caution should be taken when interpreting the reproducibility of the measurements. Fourth, previous studies have reported clinical benefits of the adjunctive use of EMD in periodontal surgery in reducing post-operative pain and swelling and in improving soft tissue wound healing, but these parameters were not measured in this study. Furthermore, the lack of a negative control group receiving only OFD is a major limitation. Therefore, further well-designed and bias-controlled clinical trials are required to apply our current findings in clinical practice and draw reliable conclusions.
Within the aforementioned limitations, the results of this study indicated that combined EMD and DPBM may result in significant additional clinical and radiographic improvements in terms of PPD, CAL, DD, and DW compared to EMD alone over a mean follow-up of 5 years. However, tooth loss did not differ significantly between the two compared groups. Further well-controlled prospective trials of long-term outcomes are necessary to confirm our findings.
Early activity after strong sutures helps tendon healing in a rat tendon rupture model

With the increase of age and the popularization of sports, tendon injury has gradually become an important factor affecting people’s health , . Tendon is a special connective tissue characterized by low cell and blood vessel density, resulting in challenging healing processes . The tendon healing process can be categorized into endogenous and exogenous healing based on the origin of the cells involved . The process of exogenous healing involves the proliferation of fibroblasts around the tendon, which then grow into the broken end of the tendon and ultimately form scar tissue . As a result, exogenous healing inevitably leads to tendon adhesion . Therefore, how to promote tendon healing while reducing the formation of scar adhesion remains an unresolved problem in clinical work. A number of recent studies have shown that early activity can increase the strength of tendon healing – . Silva Barreto et al., using micro- and nanostructure-specific X-ray tomography in a rat model, found delayed and more disorganized regeneration of tendon fibers in fully immobilized rats . In a study by Zhi Li et al., dynamic tensile stress was found to promote tendon healing through the integrin/FAK/ERK signaling pathway; tendon healing length and failure load were significantly lower in postoperatively fixed mice than in non-fixed mice . In addition, early return to activity is thought to reduce adhesion and increase joint motion , . However, a recent meta-analysis has shown that returning to activity immediately after surgery significantly increases the risk of tendon re-rupture . In clinical practice, surgeons prefer to adhere to conservative immobilization schedules to avoid medical disputes. These factors make it difficult for patients with tendon rupture to return to activity early. We hypothesized that early return to activity with a low incidence of tendon re-rupture could be achieved with strong suturing. The rat Achilles tendon injury model is widely utilized in the investigation of tendon injuries, providing a convenient and effective approach to comprehend the mechanisms and developmental patterns of such injuries . The utilization of animal models for investigating tendon injuries allows for the control of injury type and the development of consistent surgical and rehabilitation protocols. Therefore, this study applied a relatively strong suture technique in a rat Achilles tendon rupture model to explore the influence of different times of return to activity on tendon healing.
Study design and surgical procedure

This study was approved by the Medical Ethical Committee of the Hebei Medical University Third Hospital and was performed in accordance with relevant guidelines and regulations. All methods are reported in accordance with the ARRIVE guidelines. Eighty 10-week-old male Sprague-Dawley rats (weight 300–350 g) provided by Shandong Hengrong Biotechnology Co. LTD. were used in this study. The rats were kept in separate cages at a controlled temperature (21 ± 2 °C) under a 12-h light-dark cycle with ad libitum access to food and water. Rats were acclimated to the new environment for 7 days before starting the experiment. The rats were anesthetized by intraperitoneal injection of pentobarbital sodium (40 mg/kg). Cefazolin sodium (10 mg/kg) was administered to prevent infection. After shaving, the left leg was disinfected with povidone-iodine solution. The skin was incised about 2 cm along the long axis to expose the Achilles tendon, which was bluntly transected at its midpoint. The ends of the Achilles tendon were sutured together using the double Kessler method (Prolene 4–0). After disinfecting again and cleaning the wound, the skin incision was closed with 1–0 thread (Fig. ). The rats were randomly divided into 4 groups: non-fixed (NF) group, fixed one week (F-1 W) group, fixed two weeks (F-2 W) group and fixed three weeks (F-3 W) group. Each group consisted of 20 rats. Polymer splints were used to fix the ankle joint in plantarflexion in all groups except the NF group. Subsequently, the rats were kept in cages for three weeks. The F-2 W and F-3 W groups had their splints changed weekly until the end of fixation. In the fourth week, all rats were trained on a treadmill for one hour a day at a speed of 10 m/min . For the F-3 W group, treadmill training began the day after splint removal. Complications were observed and recorded daily. The rats were euthanized after seven days of treadmill training by injecting potassium chloride (1.5 mg/kg) under deep anesthesia.

Gross observation

The healing of the Achilles tendon was evaluated according to Tang’s grading method ( n = 19–20/group) . The specific classification is as follows: (i) No adhesions: there are no adhesions around the tendon, but some granulation tissue may be present; (ii) Membranous adhesions: only a few membranous adhesions with no effect on tendon gliding; (iii) Loose adhesions: these are thin, loose, soft fibers, and the tendon is easily separated; (iv) Moderately dense adhesions: moderate texture with some tendon mobility; (v) Severe extensive adhesions: poor mobility and no boundary between the tendon and the peritendinous tissue.

Passive ankle motion

The lower limb of each rat was dissected and fixed on the operating table to keep the knee joint extended. A suture was tied 1 cm away from the rat’s ankle joint. After ensuring that the suture was tight, the ankle joint was pulled with blocks of different weights. The weight was increased from 15 g to 25 g to 35 g, and dorsiflexion and plantarflexion were measured once at each weight. A digital camera was used to take pictures and record the range of motion of the ankle ( n = 19–20/group).

Biomechanical analysis

The adherent tissue around the Achilles tendon was separated, and both ends of the tendon were secured using a specialized aluminum fixture with sandpaper ( n = 9–10/group). The whole construct was then mounted onto the biomechanical testing machine (Electroforce 3230, US).
The tensile test was conducted at a speed of 0.1 mm/sec with an initial load of 0.2 N . The maximum load (N) and stiffness (N/mm) values at the breaking point of the Achilles tendons were recorded. Stiffness values were calculated by dividing the maximum load values by the amount of elongation at the breaking point of the Achilles tendons.

Histological staining

Tendon tissue samples were fixed in 10% neutral formaldehyde solution and kept in 5% formic acid ( n = 10/group). Following histopathological preparation, the specimens were embedded in paraffin blocks and sectioned. The sections were stained with hematoxylin and eosin (H&E) (Abcam, Cambridge, UK) and Masson’s trichrome (BIOGNOST, Zagreb, Croatia). The number and morphology of fibroblasts and the collagen arrangement were observed under a microscope. Six well-stained fields were randomly selected under a 200-fold light microscope, the number of fibroblasts in each field was calculated using Image Pro Plus 6.0, and the results were analyzed. Bonar’s semi-quantitative grading scale was used for evaluation. Bonar’s scale includes the analysis of the following components: (i) tenocytes, (ii) ground substance, (iii) collagen, and (iv) vascularity. Each variable was scored on a 4-point scale of 0–3 as follows: 0, normal; 1, slightly abnormal; 2, abnormal; and 3, markedly abnormal. The samples were assessed for the presence of significant abnormality, with a total score ranging from 0 (normal tendon) to 12 (most severe abnormality) .

Immunohistochemistry

The same tissue specimens from histology were utilized ( n = 10/group). The paraffin blocks were sectioned longitudinally and stained with TGF-β1, anti-type I collagen, and anti-type III collagen antibodies (Wuhan Yunkron, China). After dewaxing in xylene, the sections were dehydrated with ethanol. They were then incubated with 0.5% trypsin at 37 °C for 15 min, and endogenous peroxidase activity was inhibited using hydrogen peroxide. Blocking serum was applied for 1 h, followed by incubation with primary antibodies at 4 °C overnight. The sections were then treated with an antimouse biotin-streptavidin hydrogen peroxidase secondary antibody. DAB staining solution was applied and observed under the microscope for 2–5 min until the cell base color turned brown; the sections were then counterstained with hematoxylin for 10 s. Image Pro Plus 6.0 was used for quantitative analysis. The integrated optical density (IOD) and area value of each image were measured, and the mean density (MD = IOD/area) was calculated.

Statistical analyses

We performed all statistical analyses using the Statistical Package for Social Sciences (SPSS) 26.0 (IBM Corporation, Armonk, New York, USA). In descriptive analysis, means and standard deviations were used for continuous variables, and frequencies and percentages were used for categorical variables. One-way analysis of variance (ANOVA) was used for comparisons among multiple groups, and the LSD-t test was used for pairwise comparisons. P < 0.05 was considered to indicate a statistically significant difference.
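As a worked illustration of the stiffness definition and the group comparison, the sketch below assumes SciPy and simulated stand-in data (only the NF and F-3 W stiffness summaries are reported in the Results; the F-1 W and F-2 W parameters here are hypothetical). The pairwise step approximates the LSD-t test with unadjusted pairwise t-tests after a significant omnibus ANOVA; the exact LSD procedure uses the pooled ANOVA error term.

```python
# Worked sketch (assuming SciPy; simulated stand-in data): stiffness from a
# load-elongation curve, then one-way ANOVA with LSD-style pairwise t-tests.
import numpy as np
from scipy import stats

def stiffness(load_N, elong_mm):
    """Stiffness (N/mm) as defined in this study: maximum load divided by
    the elongation at the breaking point (the point of maximum load)."""
    i_break = int(np.argmax(load_N))
    return load_N[i_break] / elong_mm[i_break]

rng = np.random.default_rng(2)
groups = {  # hypothetical per-group stiffness values (N/mm), n = 9-10/group
    "NF":   rng.normal(18.56, 2.22, 10),
    "F-1W": rng.normal(19.5, 2.5, 10),
    "F-2W": rng.normal(20.0, 2.6, 10),
    "F-3W": rng.normal(21.09, 2.91, 9),
}

f_stat, p = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p:.4f}")

if p < 0.05:  # pairwise comparisons only after a significant omnibus test
    names = list(groups)
    for i, gi in enumerate(names):
        for gj in names[i + 1:]:
            _, pij = stats.ttest_ind(groups[gi], groups[gj])
            print(f"{gi} vs {gj}: p = {pij:.4f}")
```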
Postoperative complications

A total of 5 splints were lost during fixation: 2 in the F-1 W group, 2 in the F-2 W group, and 1 in the F-3 W group. All splint losses occurred 5 to 9 days after surgery. We reinstalled each splint within 24 h after it was lost. One case of skin necrosis in the F-3 W group occurred on the seventh day after surgery. There were 4 cases of incisional infection: 2 in the NF group and 1 each in the F-2 W and F-3 W groups. No rat deaths occurred during the experiment. There was no difference in complication rate among the groups ( P > 0.05) (Table ).

Gross observation

The tendon was examined for re-rupture after euthanasia. In the NF group and the F-1 W group, there was one case of re-rupture of the Achilles tendon each. In the F-1 W group, the rat that experienced re-rupture was also one whose splint had fallen off. Both rats that experienced re-rupture were excluded from all further experiments. On assessment of the degree of tendon adhesion, the degree of adhesion in each group gradually increased with the extension of splint fixation time. The adhesion degree of the F-2 W group was significantly higher than that of the NF and F-1 W groups ( P < 0.05). Additionally, the adhesion degree of the F-3 W group was significantly higher than that of the other three groups ( P < 0.005) (Table ).

Passive ankle motion

The plantarflexion (Fig. ) and dorsiflexion (Fig. ) motion of the NF group were significantly better than those of the other three groups under all weights ( P < 0.001). The plantarflexion and dorsiflexion motion of the F-3 W group were significantly lower than those of the other three groups under all weights ( P < 0.001). In particular, the ankle joint of the F-3 W group showed nearly no movement at a weight of 15 g. Although there was no statistical difference in dorsiflexion range of motion between the F-1 W and F-2 W groups ( P > 0.05), overall ankle motion decreased with longer immobilization time.

Biomechanical evaluation

In the comparison of peak load among the groups, the peak load of the F-3 W group was significantly lower than that of the other three groups ( P < 0.01). The stiffness of the F-3 W group was 21.09 ± 2.91 N/mm, significantly greater than that of the NF group at 18.56 ± 2.22 N/mm ( P < 0.05). Differences in peak load and stiffness among the NF, F-1 W and F-2 W groups were not statistically significant (Fig. ).

Histological observation

HE staining revealed that the regenerated collagen fibers in the F-3 W group exhibited lower density and organization compared to the other three groups. Furthermore, the F-3 W group displayed heightened vascularization and accumulation of inflammatory cells. Additionally, Masson’s trichrome staining indicated a decreased collagen fiber density in the F-3 W group relative to the other three groups. Analysis with Image Pro Plus 6.0 image analysis software showed that the number of fibroblasts in the F-3 W group was significantly higher than in the other groups ( P < 0.001). The Bonar score was used to quantitatively evaluate the quality of regenerated tissue, and the scores of the NF and F-1 W groups were significantly lower than those of the other two groups ( P < 0.05) (Fig. ).

Immunohistochemistry

Col-1 expression in the NF group was higher than that in the F-2 W and F-3 W groups ( P < 0.05), and Col-3 expression in the F-3 W group was higher than that in the NF and F-1 W groups ( P < 0.05).
The ratio of Col-1:3 in the NF group was higher than in the other three groups ( P < 0.05), and the ratio of Col-1:3 in the F-1 W group was higher than in the F-3 W group ( P < 0.05). The expression of TGF-β1 was lower in the NF group than in the other three groups, while the expression of TGF-β1 was higher in the F-3 W group than in the other three groups ( P < 0.05). It is worth noting that there was no statistically significant difference in the immunohistochemical results between F-1 W and F-2 W ( P > 0.05) (Fig. ).
This study investigated the effects of different times of return to activity on tendon healing in a rat Achilles tendon injury model. Through various experimental methods, we found that as the time of return to activity was advanced, the strength of the healing tendon increased while the degree of adhesion decreased. In clinical practice, it is more beneficial for patients to use strong sutures that can withstand earlier rehabilitation exercises. Early functional exercise can elongate and relax the external connective tissue, reduce the contact between the anastomosis and the surrounding tissue, inhibit the growth of scar tissue, and prevent external adhesion . Additionally, mechanical stress stimulation can promote cell proliferation and tendon differentiation, thereby enhancing the strength of tendon healing . However, in a study by Godbout et al., immediate post-operative exercise appeared to result in a decrease in tendon mechanical properties . In this study, although the duration of active movement varied among the groups of rats, this was done to ensure that all rats had the same four-week period for tendon healing. This is unavoidable when immobilization time is the variable and healing time is held constant, and it can be regarded as another manifestation of the different immobilization times. We therefore scheduled the treadmill training of all rats in the fourth week after surgery. In this way, we avoided re-rupture of the Achilles tendon caused by early high-intensity exercise in some groups, thus ensuring the validity of the study results. To safely remove the splint earlier, two measures need to be considered. The first is the strength of the tendon suture, and the second is the speed and quality of tendon healing. For suture strength, the most important factor is the number of sutures crossing the tendon ends, but excessive suturing can compromise the blood supply of the tendon , . In this study, we used the double Kessler method to support early postoperative activity; the number of stitches across the broken end is twice that of the traditional Kessler suture method. The incidence of tendon re-rupture under strong suturing was much lower than that reported clinically , . In addition, the method of suturing is also crucial. For patients, suture methods such as the “Modified Lim”, “Tsai”, and “Tsuge” techniques may be able to withstand rehabilitation exercises of different intensities , . However, the influence of different suture methods on tendon healing and its strength still needs further study. In recent years, most research on improving the quality of tendon healing has focused on two directions. One approach is autologous transplantation, including bone marrow concentrate, platelet products, and fat, which utilizes their rich growth factors to promote tendon healing , , . Another approach is local drug delivery, including aspirin, metformin, and even sildenafil, to control inflammatory responses and improve the quality of tendon healing – . These methods have shown promising results in animal experiments, but they are currently less commonly used in clinical practice. We need to identify, among the many candidate interventions, a reliable measure that enables most patients to return to activity safely and earlier; this will be the focus of our further research. In clinical studies of tendon rupture, the benefits of early activity are undeniable , .
In a prospective randomized controlled trial conducted by Deng et al., early mobilization not only improved the early functional outcomes of the ankle joint, but also resulted in earlier hospital discharge and return to work . In a clinical study with over a decade of follow-up, the Leppilahti score of the early activity group remained higher than that of the control group at the late stage . However, there is still controversy over whether early rehabilitation activities increase the incidence of re-rupture. Despite a wealth of studies showing the benefits of early activity, a recent meta-analysis still supports the idea that immediate activity after Achilles tendon repair may increase the risk of re-rupture . This is also the reason for considering stronger suture methods in this study. According to a study by Aoto Sato et al. on the chicken flexor tendon, immobilization for more than 3 weeks results in irreversible adhesion of the tendon. In this study, all rats were subjected to treadmill exercise at the end of the fourth week. The passive ankle range of motion, fibroblast counts, biomechanical results, and immunohistochemical results did not differ between the F-1 W and F-2 W groups; it may be that functional exercise improved some of these outcomes. Nevertheless, the experimental results of the F-3 W group were significantly different from those of the other three groups, reflecting not only that prolonged immobilization can impair tendon healing and increase adhesion, but also that such effects may require longer rehabilitation exercise to reverse, or may even cause irreversible functional loss. Taking all these factors into consideration, we believe that tendon rupture repair should be combined with a strong suture to enable the patient to return to activity within two weeks after surgery. Healing of tendon injury can be divided into three stages: inflammation, proliferation, and remodeling . During the proliferative phase, the synthesis of Col-3 reaches its peak, and Col-3 constitutes the main component of the extracellular matrix . However, the arrangement of Col-3 is disordered, its mechanical properties are poor, and it can inhibit the growth of collagen fiber diameter, which may explain the decline in the biomechanical performance of healing tendons . Therefore, the Col-1:3 ratio is an important indicator for judging the quality of healing. In this study, the difference in the Col-1:3 ratio between the groups was more pronounced than when comparing Col-1 or Col-3 alone. TGF-β1 is recognized as one of the most potent profibrogenic factors during the tendon healing process. In tendon injuries, TGF-β1 exhibits a pattern of initial increase followed by decrease. Most studies support that TGF-β1 reaches its peak at 2 weeks and returns to normal around 4 weeks , . Additionally, it plays a multifunctional role in regulating all three stages of tendon healing . During the inflammatory phase, activated platelets release cytokines, particularly TGF-β1, which rapidly recruit inflammatory cells to the injury site and accelerate angiogenesis in an autocrine or paracrine manner , . The proliferation stage is characterized by a significant increase in fibrotic scar tissue and peak cell numbers in the repair area . TGF-β1 is closely associated with this phase, as it strongly promotes fibrotic scar formation and controls various cell behaviors.
During the remodeling stage, TGF-β1 can accelerate the remodeling process through collagen synthesis rather than degradation of scar tissue . In this study, the expression of TGF-β1 in the F-3 W group was significantly higher than that in the other groups. This suggests that one of the mechanisms by which early return to activity improves tendon healing quality may be that TGF-β1 expression returns to normal more quickly. We hypothesize that the sustained high expression of TGF-β1 in rats with long-term immobilization is a compensation for poor tendon healing, but that it also promotes exogenous tendon healing, leading to adhesion around the tendon. This hypothesis may provide insight for future studies on tendon healing mechanisms. There are still several limitations in this study. First, this study only examined tendon performance at 4 weeks after surgery, which means that the function of the tendon may improve further with an extended follow-up period. Second, rats cannot follow a graduated rehabilitation program as humans do; the contralateral limb may provide some degree of compensation, and even fixed treadmill training cannot guarantee that the weight-bearing and activity of the operated limb were consistent among the different groups of rats. Finally, although the use of rat models to study tendon injury is well established, further clinical studies are needed to validate the accuracy of these results.
Among the four groups of rats, the NF group had the best biomechanical performance and passive ankle range of motion, as well as the least adhesion, while the healing quality of the tendons in the F-3 W group was significantly lower than that of the other three groups. This suggests that early return to activity under strong tendon sutures is more beneficial for improving patient outcomes. Overall, as the time of return to activity was advanced, the tensile strength of the repaired tendon increased and the degree of adhesion decreased at 4 weeks postoperatively. These results indicate the need to explore safe methods of early return to activity for patients with tendon injuries.
Implementation of a competency-based medical education approach in public health and epidemiology training of medical students

Among the profound changes that have occurred in the practice of medicine in the twenty-first century are greater sophistication, high-technological dependence, a personalized approach and extreme increases in costs. Modern preventive medicine uses proactive interventions, surgery and the chronic use of preventive medications. Clinical reasoning and clinical decision-making have expanded from being almost exclusively based on deterministic pathophysiological principles to include clinical and population-based evidence . Current medical practice is also multi-disciplinary, mandating coordinated teamwork. The need for stronger links between medicine and public health is ongoing, and includes the need for a clinical and public health workforce trained to collaborate in a multi-disciplinary environment . Increasingly complex epidemiological research methods require physicians to acquire broad competencies in research methodologies and statistics to enable critical appraisal of the literature when making clinical decisions. Physicians’ use of evidence-based medicine (EBM) has gained importance for weighing the benefits and harms of clinical decisions such as those relating to diagnosis, disease prognosis and intervention. In parallel to the above, many changes have occurred in medical education. Medical training has shifted from frontal teaching and an observer-apprentice approach to a task-oriented approach . Recommendations of the 2010 Carnegie report, which are being implemented in the US and the UK, include, for example, the need to strengthen connections between formal and experiential knowledge across the continuum of medical education . In addition, up-to-date teaching should emphasize an evidence-based approach that empowers the medical student to actively search, rank, appraise, interpret and implement the evidence that is relevant to individual patients . Preventive medicine, which is often the most cost-effective medical approach, has become mandatory to restrain the increasing costs of chronic disease care. For many years, public health was a marginalized, low-profile discipline in medical education . However, there is growing concern among medical schools about gaps in the knowledge and competence of physicians in areas such as clinical preventive services, quantitative methods of risk and outcomes assessment, the practice of community medicine, and health services organization and delivery . Consequently, several organizations including the Association of American Medical Colleges, the Institute of Medicine (IOM), and the United Kingdom General Medical Council (GMC) have emphasized the importance of undergraduate medical training in the field of public health . The effect of physicians’ health care practice on that of their patients was demonstrated by the positive relationship found between physicians’ and patients’ influenza vaccination rates . The Sackler School of Medicine at Tel-Aviv University was founded in 1964 with the goal of educating highly professional, knowledgeable and compassionate physicians. In accordance with the above-mentioned concerns, and as part of the implementation of a revised curriculum, a committee of medical doctor faculty members who are board certified in public health and experienced in epidemiological research was convened in 2012–2013.
The task of the committee was to evaluate and update objectives for the public health curriculum for medical students; to review and revise the current curriculum; to introduce a revised curriculum in public health; and to introduce appropriate teaching methods in accordance with the competency-based medical education (CBME) approach . This paper presents the process and recommendations of the committee, which were approved and adopted by the teaching committee of the Tel Aviv Sackler Medical School and implemented during the past 4 years.

Training medical students in public health

Awareness has grown over the past 2 decades of the importance of the public health discipline to clinicians, and of the need to instill medical students with competencies in public health . The Consensus Conference on Undergraduate Public Health Education advocated that all undergraduate medical students have access to an education in public health . The Association of American Medical Colleges and The Healthy People Curriculum Task Force published recommendations to include a population health curriculum as part of the 4 years of medical training . The IOM has since called for the US public health system to evolve from a government-centered system to one involving broad partnerships with healthcare and other organizations in communities . In the working document, ‘Tomorrow’s Doctors’ , the UK GMC recommended that medical school education include education in disease prevention, sociological and psychological aspects of health and disease, population health, scientific research methods and critical appraisal of the literature . Medical schools in the US and the UK have been placing greater emphasis on the teaching of clinical prevention and health promotion . The need to dedicate a specific curriculum to how the health system functions and to the role of the clinician within it was recently recognized by the AMA educational consortium, which published a book on health systems science in medical education, calling to bring forth the “third pillar”, which was until now “part of the hidden curriculum in medical education”, intertwining with the other two (traditional) pillars: basic science and clinical science . The understanding of how physicians deliver care to patients, how patients receive care, and how health systems function is recognized as a pillar that necessitates the training of medical students, as part of the need to align medical education with the ongoing changes in health care delivery.

Examples of changes over the last decades in the curricula of public health training in several medical schools around the world

Competencies in statistics and epidemiology as tools for conducting and understanding quantitative medical research

A historical view of statistics training was that physicians need to know statistics primarily if they were conducting, or going to conduct, research during their medical career; and that when conducting research, they could generally rely on professional consultation with statisticians . Nowadays, physicians use statistics and probability methods for a wide range of activities . Statistics and related competencies are used in daily clinical practice for understanding the validity and precision of study results, explaining risk to patients, comparing treatment protocols and outcomes, interpreting the relevance and implications of diagnostic test results, interacting with drug representatives and reading pharmaceutical literature .
Physicians need to be capable of interpreting clinical epidemiology data and of understanding the limitations of research and statistical inference. The sophisticated statistical methods used in an increasing number of studies necessitate a good understanding of statistics to appraise the scientific literature. Surveys conducted in various countries show a need for improving the skills of epidemiological research, statistical inference and data analysis among physicians and medical students . Almost half of the UK physicians who responded to a questionnaire felt that statistics training did not seem useful during their attendance at medical school; however, 73% felt that statistics were relevant to their subsequent careers and that the teaching of statistics should include lectures, seminars and problem-based practical exercises . The authors recommended that statistical training should start early and continue throughout medical school, and be presented at an understandable level that is practical and integrated with other subject areas . During the 1960s at Harvard Medical School there was a long-running required Biostatistics course. By the 1970s there was an elective course, taken by a third of the class, that was called “Introduction to Biostatistics and Epidemiology.” By the early 1980s a clinical decision-making course was added; today that same course would be called “Evidence Based Medicine” (EBM). In the last decade, Harvard Medical School implemented a course for first-year medical and dental students entitled “Clinical Epidemiology and Population Health” . The objectives of the course were to instill knowledge in basic epidemiology and biostatistics, causal inference, confounding and other issues related to research interpretation, decision making and skills for clinical and population-level interventions, health promotion and behavior change strategies, physicians’ roles in the public health system, and population-level surveillance. A few years ago, the University of Toronto initiated a 4-year course for undergraduate medical students, which broke down the barrier between the pre-clerkship period and clinical clerkships . Based on a longitudinal, “spiral” curriculum, the course revisits educational concepts at increasing levels of complexity across the curriculum. Descriptive epidemiology is taught in the first year, analytic epidemiology in the second year and clinical epidemiology in the third and fourth years. Similarly, the basic structure of the healthcare system is taught in the first year; a project involving the organization of community-based services follows in the second year; quality improvement and patient safety in the third year; and the effect of physicians’ payment systems on the quality of patient care in the fourth year. After this reorganization of the course material into the longitudinal curriculum, with no change in the number of learning hours, the ranking of the University of Toronto’s training in public health improved to first among all medical schools in Canada.

Evidence based medicine

The early introduction of EBM in medical schools has been effective in changing the thought processes of medical graduates. It was also found to increase the ability for logical and critical appraisal, better suited to the understanding of the disease process and subsequent management .
In England, a six-week full-time course linking EBM with ethics and the management of change in health services was introduced for third-year undergraduate medical students at Imperial College London . The students undertook projects such as hand washing in a neonatal unit to prevent infections, drug monitoring in the elderly to reduce the risk of falls, and the use of peak flow meters in the management of asthma. The course supported the notion that undergraduates and junior clinical students can adopt and promote significant changes that make clinical care more evidence-based.

Health promotion

Health promotion is a resource for theoretical knowledge and practical skills in health issues such as sexual health, nutrition, physical activity, exercise and fitness, weight control, and alcohol and tobacco control. In 2010, fewer than half of the medical schools in the UK included sports and exercise medicine in their curriculum. King’s College London introduced exercise medicine, which focused on the health benefits of physical activity, the doctor’s role in assessing and prescribing physical activity, and the physiological adaptations and risks of physical activity . The intervention significantly improved the confidence of preclinical medical students in their ability to counsel patients on the health benefits of physical activity, as well as their knowledge of recommended physical activity guidelines . Medical students who underwent obesity intervention education scored higher on relevant knowledge, had more self-confidence in physical activity and nutrition counseling, and took more waist-hip measurements . In a community health center serving a Latino immigrant population in the United States, a 9-month pilot course for medical students that combined didactic instruction in the social determinants of health with practical experience in developing, implementing and evaluating an intervention was shown to be feasible and effective .

Summarizing the above: the urgent need to strengthen the education of medical students in epidemiology and public health, in an integrative manner during the pre-clinical and clinical years, has become evident in many countries, and action has been taken. Several challenges have had to be met, including the “old” perception that the topic is of little relevance to clinical practice, low funding, low institutional priority and competition with other traditional fields (e.g. anatomy, physiology, biochemistry and histology) . Nonetheless, recognition of the importance of this field has increased dramatically .

Findings and insights

The experience of the Sackler Faculty of Medicine in the adoption, implementation and evaluation of competency-based medical education in public health

A committee was appointed in 2012 to propose a competency-oriented curriculum in public health for medical students. Our course of action was multistep, much like the Situational Model , starting with mapping the courses provided by our department (the Department of Epidemiology and Preventive Medicine) onto the curriculum of the 6-year medical training. In parallel, we defined the competencies in public health required of a medical student and a clinician. We then examined each course syllabus and identified gaps as well as overlaps between courses. Finally, we proposed a revised curriculum in public health that incorporated all of our conclusions and suggestions.
This was presented to the Faculty of Medicine’s Educational Committee and approved by the Dean after adjustments were made according to the Faculty’s constraints. We continuously review the course evaluations that students fill in, voluntarily and anonymously, on the Web-based university portal, and modify the courses accordingly.

Defining the required competencies

The committee defined three main goals for the training of medical students, according to their future needs and responsibilities: a) critical appraisal of the scientific literature to inform practice; b) conducting research using epidemiological tools and methods; and c) practicing and advocating health promotion and disease prevention in the clinic. Following these goals, the main competencies physicians require were defined:

- Skills to appraise the quality of the various types of epidemiologic research, and tools for comprehensive reading and understanding of the scientific literature according to EBM;
- Competency in efficient and precise literature search;
- Competency in basic statistical skills;
- Competency in planning and conducting research, i.e. knowledge of epidemiological methods, including the various study designs, choice of an appropriate study population, methods for data collection, and analysis and interpretation of study results;
- Competency in applying health-promoting principles and strategies in the selection of disease prevention measures and recommendations;
- Competency in the implementation of EBM techniques in public health decision making, e.g. immunizations and population screening; and
- Competency in examining and analyzing disease trends from a population perspective.

In addition, we identified the importance of understanding the structure of health systems, and of increasing awareness of the physician’s role in these systems, as a means of better pursuing the skill of practicing and advocating health promotion and disease prevention in the clinic.

Identifying gaps and needs to meet the required competencies

The committee performed an overview of all relevant education and training syllabi at the Sackler School of Medicine of Tel Aviv University. All lectures in each course were reviewed, and overlapping topics given in more than one lecture were identified. This process also enabled the detection of important topics that were absent from the curriculum. The committee met with all teachers and instructors and reviewed the course syllabi with them. Those with overlapping lectures were asked to meet and revise their courses so that no unnecessary overlaps persisted. Two new courses were planned to fill the gaps in important topics. The entire 6-year curriculum was presented to, and approved by, first the faculty of the School of Public Health and then the faculty of the Sackler School of Medicine (see Table ).

Implementing the competency-based medical education approach

The new public health curriculum in our medical school is based on a longitudinal approach and was designed to harmonize and integrate the clinical and public health teaching, to increase relevance, and to address the above-mentioned competencies. The public health curriculum starts early in the first year of medical school and progresses systematically, with each year building on competencies already gained. The goal is efficient utilization of time and avoidance of repetition. The limited timeframe allocated to public health training within the busy and competitive medical school curriculum is a constraint of the program.
The courses and skills provided in the longitudinal public health curriculum, within the 6-year medical training of the Sackler Medical School, are the following (see Fig. ). The figure illustrates the concept that epidemiology and statistics form the foundation, and are given a substantial number of hours in the curriculum, on which medical students gradually build their public health knowledge; the number of hours gradually decreases while the topics become more sophisticated, so that in the last year a relatively small, albeit very important, part of the clerkships draws on this learning.

Epidemiology, statistics, and research methods (1st year): This course was redesigned to achieve a comprehensive and integrative understanding of key epidemiologic and biostatistical methods. The goals of the course are to improve students’ abilities to understand and interpret epidemiological studies and to provide practical experience in epidemiological research, study design, and key methods in biostatistics. Topics covered in the course include integrating information and data, building statistical models, conducting data analysis, and acquiring tools for decision making in selecting diagnostic tools and treatment protocols (an illustrative sketch of the kind of calculation students practice appears after this course list). Also emphasized are the implementation of statistical and epidemiological tools for understanding disease risk and prevention, etiology and prognosis, and the evaluation of the success and clinical relevance of preventive interventions. The fundamentals of biostatistics and epidemiology are taught together, highlighting the relevance of these two disciplines to the understanding and interpretation of medical data.

Health promotion: The physician’s role (2nd year): This is one of two courses initiated following the committee’s detection of gaps in the training of medical students. Using epidemiological concepts and terms acquired during the first year, students are introduced to the main concepts, principles, and methods of health promotion at the individual and population levels. Students practice communicating and marketing a healthy lifestyle to patients and gain knowledge of the impact of a health-promoting environment (e.g. media campaigns, regulatory tools at the local and national levels) on the adoption of a healthy lifestyle. The course started as an 8-week short course but was broadened during 2015–2016 to include three sessions on exercise and physical activity: the approach to medical examinations before starting a physical activity program in healthy and diseased patients; the physician’s responsibility to evaluate patients’ level of physical activity and to encourage them to exercise (Hoffman et al. 2016); and the comprehensive physical activity prescription, which physicians are required to provide to each of their patients entering an exercise program (Joy et al. 2016). In this last session, students write their own exercise prescription and gain practical experience training according to it. An additional topic is a two-lecture session on oral hygiene and its association with systemic diseases and medications.
Selected paradigms in epidemiology and public health (3rd year): Following the basic course in epidemiology and biostatistics in the first year, this intensive one-week course gives an overview of the epidemiology of specific diseases and conditions such as cancer, cardiovascular disease, diabetes, infectious diseases, geriatric and childhood diseases, maternal and child health, and psychiatric illnesses. The course emphasizes the specific methodologies used in the study of these illnesses and conditions and presents the specific disease registries available. The second part of the course focuses on the national health system and aims to elucidate the role of the clinician as a public health promoter within it. The paradigm of combining health policy with clinical decision making is emphasized, using relevant and timely examples.

Tools for practicing Evidence Based Medicine (EBM) (3rd year): Tools and techniques for practicing EBM are provided by means of workshops and simulations of real-life situations. At the end of the course, the student should be able to frame a clinical question arising from a specific clinical situation, search the medical literature, obtain the most relevant material, and critically appraise the literature so as to reach the best available solution to the clinical question. This course reinforces the competencies provided in the first and second years and requires the student to apply them.

The use of epidemiologic methods in clinical decision making (3rd year): This course provides the epidemiological background to the major body organs and systems taught in the third and fourth years, while focusing on how epidemiology is used for clinical decision making. Specific examples are presented from body systems such as the gastrointestinal and urinary tracts. The course is intended to reinforce skills covered in the first year, while exploiting the advanced stage attained in the students’ basic medical knowledge.

E-learning course in planning and writing research proposals for the M.D. thesis (4th–6th year): This electronic course is designed to provide students with the competencies needed to develop research questions and to formulate the research methodology relevant to their MD thesis. The course builds on the knowledge and competencies taught during previous years, and is presented through a set of online guided tools.

Clerkship in public health and epidemiology (6th year): Experiential learning in EBM in public health. During this 1-week interactive workshop, students experience the implementation of epidemiological tools, from data collection and analysis to public health planning and decision making. The course includes practical examples such as the prevention of cervical cancer, or the implementation of various programs for the secondary prevention of breast cancer and their impact on breast cancer mortality. As in other clinical clerkships, the students experience the process of decision making; in this case it relates to decisions in public health. At this stage, just before graduation, the students have most of the medical knowledge they will acquire during their MD degree. They have the ability to use clinical and epidemiological competencies to understand the broad range of considerations involved in health policy at the individual and population levels.
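As a flavor of the first-year data-analysis skills referred to in the course list above, the sketch below works through a basic epidemiological calculation of the kind students practice: estimating a risk ratio with a 95% confidence interval from a 2×2 cohort table. This is an illustrative sketch only; the counts are invented and do not come from any study or course material cited in this paper.

```python
# Illustrative sketch: risk ratio and 95% CI from a hypothetical 2x2 cohort table.
# The counts are invented; the interval is the standard Wald CI on the log risk ratio.
import math

a, b = 30, 70   # exposed group:   30 cases, 70 non-cases
c, d = 10, 90   # unexposed group: 10 cases, 90 non-cases

risk_exposed = a / (a + b)
risk_unexposed = c / (c + d)
rr = risk_exposed / risk_unexposed

se_log_rr = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
ci_low = math.exp(math.log(rr) - 1.96 * se_log_rr)
ci_high = math.exp(math.log(rr) + 1.96 * se_log_rr)

print(f"Risk ratio = {rr:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")
# Risk ratio = 3.00 (95% CI 1.55 to 5.80)
```

Beyond the arithmetic, the course’s emphasis is on interpretation: a student should be able to state that the exposed group’s risk is about three times higher, and that the interval excluding 1 makes chance an unlikely sole explanation.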
Program evaluation

The revised public health curriculum was implemented with first-year students during 2013–2016/17. We have been revising and refining the first-, second- and third-year courses according to feedback from students and lecturers. All courses in our school have a computerized feedback system, which is open from the last lecture until the final exam and is filled in on a voluntary, anonymous basis. In addition, meetings are held with the students’ representatives to discuss their expectations and feedback, and necessary changes are continuously integrated into the courses. In the coming academic year (2017–18), the last class from the old curriculum will graduate. At the end of that year we will conduct a survey among these students during the clerkship in public health, to evaluate their perceived understanding of public health topics and of the competencies we intended to convey in our curriculum. We will repeat this survey among the following class – the first to experience the full 6-year revised curriculum – and compare the results. In the future we intend to assess the quality of MD theses submitted at graduation, according to exposure to the intervention, and to compare evaluations of EBM skills during clinical clerkships. We expect more MD theses to be published as papers in peer-reviewed international journals.
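The planned comparison between the last old-curriculum class and the first fully revised-curriculum class amounts to comparing response proportions across two independent cohorts. The sketch below shows one conventional way such a comparison could be run, a two-proportion z-test; the cohort sizes and response counts are hypothetical placeholders, since at the time of writing the surveys had not yet been conducted.

```python
# Illustrative sketch: comparing the share of students reporting a given competency
# between two classes with a two-proportion z-test. All counts are hypothetical.
import math

def two_proportion_z(x_a: int, n_a: int, x_b: int, n_b: int):
    """Return (z, two-sided p) for H0: the two underlying proportions are equal."""
    p_a, p_b = x_a / n_a, x_b / n_b
    p_pool = (x_a + x_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail probability
    return z, p_value

# Hypothetical: 90/150 old-curriculum vs. 115/150 revised-curriculum students
# report confidence in critically appraising a paper.
z, p = two_proportion_z(90, 150, 115, 150)
print(f"z = {z:.2f}, two-sided p = {p:.4f}")  # z = 3.10, p = 0.0019
```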
Competencies in statistics and epidemiology as tools for conducting and understanding quantitative medical research A historical view of statistics training was that physicians need to know statistics primarily if they were conducting or going to conduct research during their medical career; and when conducting research, they could generally rely on professional consultation with statisticians . Nowadays, physicians use statistics and probability methods for a wide range of activities . Statistics and related competencies are used in daily clinical practice for understanding the validity and precision of study results, explaining risk to patients, comparing treatment protocols and outcomes, interpreting the relevance and implications of diagnostic test results, interacting with drug representatives and reading pharmaceutical literature . Physicians need to be capable of interpreting clinical epidemiology data and of understanding the limitations of research and statistical inference. The sophisticated statistical methods that are used in an increasing number of studies necessitate good understanding of statistics to appraise the scientific literature. Surveys conducted in various countries show a need for improving skills of epidemiological research, statistical inference and data analysis among physicians and medical students . Almost half of UK physicians who responded to a questionnaire felt that statistics training did not seem useful during their attendance at medical school; however, 73% felt that statistics were relevant to their subsequent careers and that teaching statistics should include lectures, seminars and problem-based practical exercises . The authors recommended that statistical training should start early and continue throughout medical school; and be presented at an understandable level, which is practical and integrated with other subject areas . During the 1960’s at Harvard Medical School there was a long-running required Biostatistics course. By the 1970s there was an elective course, taken by a third of the class that was called, “Introduction to Biostatistics and Epidemiology.” By the early 1980s a clinical-decision making course was added; and today that same course would be called “Evidence Based Medicine” (EBM). In the last decade, Harvard Medical School implemented a course for first-year medical and dental students entitled “Clinical Epidemiology and Population Health” . The objectives of the course were to instill knowledge in basic epidemiology and biostatistics, causal inference, confounding and other issues related to research interpretation, decision making and skills for clinical and population-level interventions, health promotion and behavior change strategies, physicians’ roles in the public health system and population level surveillance. A few years ago, the University of Toronto initiated a 4 year course for undergraduate medical students, which broke down the barrier between the pre-clerkship period and clinical clerkships . Based on a longitudinal, “spiral” curriculum, the course revisits educational concepts at increasing levels of complexity across the curriculum. Descriptive epidemiology is taught in the first year, analytic epidemiology in the second year and clinical epidemiology in the third and fourth years. 
Similarly, the basic structure of the healthcare system is taught in the first year; then a project involving organization of community-based services in the second year; quality improvement and patient safety in the third year; and the effect of physicians’ payment systems on quality of patient care in the fourth year. After the change in the organization of the course material into the longitudinal curriculum with no change in the number of hours of learning, the ranking of the University of Toronto’s training in public health improved and became number one among all medical schools in Canada. Evidence based medicine The early introduction of EBM in medical schools has been effective in changing the thought process of the medical graduates. It was also found to increase the ability for logical and critical appraisal, better suited for the understanding of the disease process and subsequent management . In England, a six-week full time course linking EBM with ethics and the management of change in health services was introduced for third-year undergraduate medical students in Imperial College London . The students undertook projects such as hand washing in a neonatal unit to prevent infections, drug monitoring in the elderly to reduce the risk of falls, and the use of peak flow meters in the management of asthma. The course supported the notion that undergraduates and junior clinical students can adopt and promote significant changes that make clinical care more evidence-based. Health promotion Health Promotion is a resource for theoretical knowledge and practical skills in health issues, such as sexual health, nutrition, physical activity, exercise and fitness, weight control, and alcohol and tobacco control. In 2010, less than half of the schools in the UK included sports and exercise medicine as part of their curriculum. King’s College London introduced exercise medicine, which focused on the health benefits of physical activity, the doctor’s role in assessing and prescribing physical activity, and the physiological adaptations and risks of physical activity . The intervention significantly improved the confidence of preclinical medical students in their ability to counsel patients on the health benefits of physical activity, as well as their knowledge of recommended physical activity guidelines . Medical students who underwent obesity intervention education scored higher on relevant knowledge, had more self-confidence in physical activity and nutrition counseling, and took more waist-hip measurements . In a community health center serving a Latino immigrant population in the United States, a 9-month pilot course for medical students that combined didactic instruction in the social determinants of health with practical experience in developing, implementing and evaluating an intervention was shown to be feasible and effective . Summarizing the above, the urgent need to strengthen the education of medical students in the field of epidemiology and public health in an integrative manner during the pre-clinical and clinical years, has become evident in many countries and action has been taken. Several challenges have had to be met, including the “old” perception that this topic is of little relevance to clinical practice, low funding, low institutional priority and the competition with other traditional fields (e.g. anatomy, physiology, biochemistry and histology) . Nonetheless, recognition of the importance of this field has increased dramatically . 
A historical view of statistics training was that physicians need to know statistics primarily if they were conducting or going to conduct research during their medical career; and when conducting research, they could generally rely on professional consultation with statisticians . Nowadays, physicians use statistics and probability methods for a wide range of activities . Statistics and related competencies are used in daily clinical practice for understanding the validity and precision of study results, explaining risk to patients, comparing treatment protocols and outcomes, interpreting the relevance and implications of diagnostic test results, interacting with drug representatives and reading pharmaceutical literature . Physicians need to be capable of interpreting clinical epidemiology data and of understanding the limitations of research and statistical inference. The sophisticated statistical methods that are used in an increasing number of studies necessitate good understanding of statistics to appraise the scientific literature. Surveys conducted in various countries show a need for improving skills of epidemiological research, statistical inference and data analysis among physicians and medical students . Almost half of UK physicians who responded to a questionnaire felt that statistics training did not seem useful during their attendance at medical school; however, 73% felt that statistics were relevant to their subsequent careers and that teaching statistics should include lectures, seminars and problem-based practical exercises . The authors recommended that statistical training should start early and continue throughout medical school; and be presented at an understandable level, which is practical and integrated with other subject areas . During the 1960’s at Harvard Medical School there was a long-running required Biostatistics course. By the 1970s there was an elective course, taken by a third of the class that was called, “Introduction to Biostatistics and Epidemiology.” By the early 1980s a clinical-decision making course was added; and today that same course would be called “Evidence Based Medicine” (EBM). In the last decade, Harvard Medical School implemented a course for first-year medical and dental students entitled “Clinical Epidemiology and Population Health” . The objectives of the course were to instill knowledge in basic epidemiology and biostatistics, causal inference, confounding and other issues related to research interpretation, decision making and skills for clinical and population-level interventions, health promotion and behavior change strategies, physicians’ roles in the public health system and population level surveillance. A few years ago, the University of Toronto initiated a 4 year course for undergraduate medical students, which broke down the barrier between the pre-clerkship period and clinical clerkships . Based on a longitudinal, “spiral” curriculum, the course revisits educational concepts at increasing levels of complexity across the curriculum. Descriptive epidemiology is taught in the first year, analytic epidemiology in the second year and clinical epidemiology in the third and fourth years. Similarly, the basic structure of the healthcare system is taught in the first year; then a project involving organization of community-based services in the second year; quality improvement and patient safety in the third year; and the effect of physicians’ payment systems on quality of patient care in the fourth year. 
After the change in the organization of the course material into the longitudinal curriculum with no change in the number of hours of learning, the ranking of the University of Toronto’s training in public health improved and became number one among all medical schools in Canada. The early introduction of EBM in medical schools has been effective in changing the thought process of the medical graduates. It was also found to increase the ability for logical and critical appraisal, better suited for the understanding of the disease process and subsequent management . In England, a six-week full time course linking EBM with ethics and the management of change in health services was introduced for third-year undergraduate medical students in Imperial College London . The students undertook projects such as hand washing in a neonatal unit to prevent infections, drug monitoring in the elderly to reduce the risk of falls, and the use of peak flow meters in the management of asthma. The course supported the notion that undergraduates and junior clinical students can adopt and promote significant changes that make clinical care more evidence-based. Health Promotion is a resource for theoretical knowledge and practical skills in health issues, such as sexual health, nutrition, physical activity, exercise and fitness, weight control, and alcohol and tobacco control. In 2010, less than half of the schools in the UK included sports and exercise medicine as part of their curriculum. King’s College London introduced exercise medicine, which focused on the health benefits of physical activity, the doctor’s role in assessing and prescribing physical activity, and the physiological adaptations and risks of physical activity . The intervention significantly improved the confidence of preclinical medical students in their ability to counsel patients on the health benefits of physical activity, as well as their knowledge of recommended physical activity guidelines . Medical students who underwent obesity intervention education scored higher on relevant knowledge, had more self-confidence in physical activity and nutrition counseling, and took more waist-hip measurements . In a community health center serving a Latino immigrant population in the United States, a 9-month pilot course for medical students that combined didactic instruction in the social determinants of health with practical experience in developing, implementing and evaluating an intervention was shown to be feasible and effective . Summarizing the above, the urgent need to strengthen the education of medical students in the field of epidemiology and public health in an integrative manner during the pre-clinical and clinical years, has become evident in many countries and action has been taken. Several challenges have had to be met, including the “old” perception that this topic is of little relevance to clinical practice, low funding, low institutional priority and the competition with other traditional fields (e.g. anatomy, physiology, biochemistry and histology) . Nonetheless, recognition of the importance of this field has increased dramatically . The experience of Sackler Faculty of Medicine in the adoption implementation and evaluation of competency-based medical education in public health A committee was appointed in 2012 to propose a competencies oriented curriculum in public health for medical students. 
Our form of action was multistep, much like the Situational Model starting with mapping the courses provided by our department (the department for Epidemiology and Preventive Medicine) to the curriculum of the 6-year medical training. In parallel, we defined the required competencies, expected from a medical student and a clinician, in public health. We then looked into each course syllabus and pointed at gaps as well as overlaps between courses. Finally, we proposed a revised curriculum in public health that incorporates all of our conclusions and suggestions. This was presented to the Faculty of Medicine’s Educational Committee and approved by the Dean after adjustments were made according to the Faculty’s constraints. We continuously review the courses’ evaluations students voluntarily and anonymously fill in the Web-based university portal, and modify the courses accordingly. Defining the required competencies The committee defined 3 main goals of training of medical students according to their future needs and responsibilities: a) critical appraisal of the scientific literature to inform practice; b) conducting research using epidemiological tools and methods; and c) practicing and advocating health promotion and disease prevention in the clinic. Following these goals the main competencies physicians require were defined: Skills to appraise the quality of the various types of epidemiologic research and to acquire tools for comprehensive reading and understanding scientific literature according to EBM; Competency in efficient and precise literature search; Competency in basic statistical skills; Competency in planning and conducting research, i.e. knowledge of epidemiological methods including the various study designs, choice of an appropriate study population, methods for data collection, analysis and interpretation of study results; Competency in applying health promoting principles and strategies in the selection of disease prevention measures and recommendations; Competency in implementation of EBM techniques in public health decision making, e.g. immunizations and population screening; and Competency in examining and analyzing disease trends from a population perspective. In addition, we identified the importance of understanding the structure of health systems and of increasing the awareness of the role of the physician in these systems as a means of better pursuing the skill of practicing and advocating health promotion and disease prevention in the clinic. Identifying gaps and needs to meet the required competencies The committee performed an overview of all relevant education and training syllabus at the Sackler School of Medicine of the Tel-Aviv University. All lectures in each course were reviewed and overlapping topics given in more than one lecture were identified. This process also enabled detecting important topics that were absent in the curriculum. The committee met all teachers and instructors and reviewed the courses syllabus with them. Those with overlapping lectures were asked to meet and revise their courses so that no unnecessary overlaps persisted. Two new courses were planned to fill in the gaps in important topics. The entire 6 year curriculum was presented and approved, first to the faculty of the School of Public Health, and then to the faculty of Sackler School of Medicine (see Table ). 
Implementing the competency –based medical education approach The new public health curriculum in our medical school is based on a longitudinal approach and was designed to harmonize and integrate the clinical and public health teaching to increase relevance, and to address the above-mentioned competencies. The public health curriculum starts early in the first year of medical school and progresses systematically, with each year building on competencies already gained. The goal is efficient utilization of time and avoidance of repetitions. The limited timeframe allocated to public health training within the busy and competitive medical school curriculum is a constraint of the program. The courses and skills provided in the longitudinal public-health curriculum as part of the 6 year medical training of the Sackler Medical School are the following (see Fig. , illustrating the concept that epidemiology and statistics are the foundation, and are given a substantial number of hours in the curriculum, on which medical students are gradually building their public health knowledge, with the number of hours gradually decreasing yet the topics learned are more sophisticated, so that in their last year a relatively smaller, albeit very important, part of the clerkships will draw on this learning): Epidemiology, statistics, and research methods (1st year): this course was re-designed to achieve a comprehensive and integrative understanding of key epidemiologic and biostatistics methods. The goals of the course are to improve students’ abilities to understand and interpret epidemiological studies and to provide practical experience in epidemiological research, study design, and key methods in biostatistics. Topics covered in the course include: the ability to integrate information and data, build statistical models, conduct data analysis, and acquire tools for decision making in selecting diagnostic tools and treatment protocols. Also emphasized are implementation of statistical and epidemiological tools for understanding disease risk and prevention, etiology and prognosis, and evaluating the success and clinical relevance of preventive interventions. The fundamentals of biostatistics and epidemiology are taught together, highlighting the relevance of these two disciplines to the understanding and interpretation of medical data. Health promotion: The physician’s role (2nd year): This is one of two courses initiated following the committee’s detection of gaps in training medical students. Using epidemiological concepts and terms acquired during the first year, students are introduced to the main concepts, principles, and methods of health promotion at the individual and population levels. Students practice communicating and marketing healthy lifestyle to patients and gain knowledge of the impact of a health promoting environment (e.g. media campaigns, regulatory tools at the local and the national levels) on adoption of a healthy lifestyle. The course started as an 8-week short course but was broadened during the year 2015–2016, to include three sessions on exercise and physical activity: the approach to medical examinations before starting a physical activity program in healthy and diseased patients; the responsibility of the physician to evaluate the level of physical activity of their patients and to encourage them to exercise (Hoffman, et al. 
2016); and the comprehensive physical activity prescription, which is a required responsibility of physicians to be provided to each of their patients who enters an exercise program (Joy, et al. 2016). This last session includes the students’ writing their own exercise prescription and a practical experience in training according to this prescription. An additional topic is a two lecture session in oral hygiene and its association to systemic diseases and medications. Selected paradigms in epidemiology and public health (3rd year): Following the basic course in epidemiology and biostatistics in the first year, this intensive one-week course gives an overview of the epidemiology of specific diseases and conditions such as cancer, cardiovascular disease, diabetes, infectious diseases, geriatric and childhood diseases, maternal and child health, and psychiatric illnesses. The course emphasizes the specific methodologies used for the study of these illnesses and conditions and presents the specific disease registries available. The second part of the course focuses on the national health system, and aims to elucidate the role of the clinician as a public health promoter in the national health system. The paradigm of combining health policy with clinical decision making is emphasized, using relevant and timely examples. Tools for practicing Evidence Based Medicine (EBM) (3rd year): Tools and techniques are provided for practicing EBM, by means of workshops and simulations of real life situations. At the end of the course, the student should be able to frame a clinical question in view of a specific clinical situation, search the medical literature, obtain the most relevant material, and critically appraise the literature so as to achieve the best available solution to the clinical question. This course reinforces the competencies provided in the first and second years and requires the student to apply them. The use of epidemiologic methods in clinical decision making (3rd year): This course provides the epidemiological background to the major body organs and systems taught in the third and fourth years, while focusing on how epidemiology is used for clinical decision-making. Specific examples are presented from body systems such as the gastrointestinal and urinary tracts. The course is intended to reinforce skills covered in the first year, while exploiting the advanced stage attained in the students’ basic medical knowledge. E-learning course in planning and writing research proposals for the M.D. thesis (4th - 6th year): This electronic course is designed to provide students with the necessary competencies to develop research questions and to formulate the research methodology relevant to their MD thesis. The course is built on the knowledge and capabilities of implementing the competencies taught during previous years; and it is presented through a set of online guided tools. Clerkship in public health and epidemiology (6th year): Experiential learning in EBM in public health. During this 1 week interactive workshop the students experience the implementation of epidemiological tools from data collection and analysis to public health planning and decision making. The course includes practical examples such as prevention of cervical cancer or the implementation of various programs for secondary prevention of breast cancer and their impact on breast cancer mortality. As in other clinical clerkships, the students experience the process of decision making. 
In this case it relates to decisions in public health. At this stage, just before graduation, the students have most of the medical knowledge they will acquire during their MD degree. They have the ability to use clinical and epidemiological competencies to understand the broad range of considerations involved in health policy at the individual and population levels. Program evaluation The revised public health curriculum was implemented with first year students during 2013–2016/17. We have been revising and refining the courses of the first, second and third years according to feedback from students and lecturers. All courses in our school have a computerized feedback system, which is opened from the last lecture till the final exam, and is filled on a voluntary anonymous basis. In addition, meetings are held with the students’ representatives to discuss their expectations and feedback, and an attempt to integrate necessary changes in the courses is continuously performed. In the coming academic year (2017–18) the last class from the old curriculum will graduate. At the end of this year we will conduct a survey among these students during the clerkship in public health to evaluate their perceived understanding of public health topics and of the competencies we intended to convey in our curriculum. We will repeat this survey among the following class – the first to experience the full 6-years revised curriculum, and compare the results. In the future we intend to assess the quality of MD theses submitted at graduation, according to exposure to the intervention, and to compare evaluations of EBM skills during clinical clerkships. We expect more MD theses to be published as papers in peer-reviewed international journals. A committee was appointed in 2012 to propose a competencies oriented curriculum in public health for medical students. Our form of action was multistep, much like the Situational Model starting with mapping the courses provided by our department (the department for Epidemiology and Preventive Medicine) to the curriculum of the 6-year medical training. In parallel, we defined the required competencies, expected from a medical student and a clinician, in public health. We then looked into each course syllabus and pointed at gaps as well as overlaps between courses. Finally, we proposed a revised curriculum in public health that incorporates all of our conclusions and suggestions. This was presented to the Faculty of Medicine’s Educational Committee and approved by the Dean after adjustments were made according to the Faculty’s constraints. We continuously review the courses’ evaluations students voluntarily and anonymously fill in the Web-based university portal, and modify the courses accordingly. Defining the required competencies The committee defined 3 main goals of training of medical students according to their future needs and responsibilities: a) critical appraisal of the scientific literature to inform practice; b) conducting research using epidemiological tools and methods; and c) practicing and advocating health promotion and disease prevention in the clinic. Following these goals the main competencies physicians require were defined: Skills to appraise the quality of the various types of epidemiologic research and to acquire tools for comprehensive reading and understanding scientific literature according to EBM; Competency in efficient and precise literature search; Competency in basic statistical skills; Competency in planning and conducting research, i.e. 
knowledge of epidemiological methods including the various study designs, choice of an appropriate study population, methods for data collection, analysis and interpretation of study results; Competency in applying health promoting principles and strategies in the selection of disease prevention measures and recommendations; Competency in implementation of EBM techniques in public health decision making, e.g. immunizations and population screening; and Competency in examining and analyzing disease trends from a population perspective. In addition, we identified the importance of understanding the structure of health systems and of increasing the awareness of the role of the physician in these systems as a means of better pursuing the skill of practicing and advocating health promotion and disease prevention in the clinic. Identifying gaps and needs to meet the required competencies The committee performed an overview of all relevant education and training syllabus at the Sackler School of Medicine of the Tel-Aviv University. All lectures in each course were reviewed and overlapping topics given in more than one lecture were identified. This process also enabled detecting important topics that were absent in the curriculum. The committee met all teachers and instructors and reviewed the courses syllabus with them. Those with overlapping lectures were asked to meet and revise their courses so that no unnecessary overlaps persisted. Two new courses were planned to fill in the gaps in important topics. The entire 6 year curriculum was presented and approved, first to the faculty of the School of Public Health, and then to the faculty of Sackler School of Medicine (see Table ). Implementing the competency –based medical education approach The new public health curriculum in our medical school is based on a longitudinal approach and was designed to harmonize and integrate the clinical and public health teaching to increase relevance, and to address the above-mentioned competencies. The public health curriculum starts early in the first year of medical school and progresses systematically, with each year building on competencies already gained. The goal is efficient utilization of time and avoidance of repetitions. The limited timeframe allocated to public health training within the busy and competitive medical school curriculum is a constraint of the program. The courses and skills provided in the longitudinal public-health curriculum as part of the 6 year medical training of the Sackler Medical School are the following (see Fig. , illustrating the concept that epidemiology and statistics are the foundation, and are given a substantial number of hours in the curriculum, on which medical students are gradually building their public health knowledge, with the number of hours gradually decreasing yet the topics learned are more sophisticated, so that in their last year a relatively smaller, albeit very important, part of the clerkships will draw on this learning): Epidemiology, statistics, and research methods (1st year): this course was re-designed to achieve a comprehensive and integrative understanding of key epidemiologic and biostatistics methods. The goals of the course are to improve students’ abilities to understand and interpret epidemiological studies and to provide practical experience in epidemiological research, study design, and key methods in biostatistics. 
Topics covered in the course include: the ability to integrate information and data, build statistical models, conduct data analysis, and acquire tools for decision making in selecting diagnostic tools and treatment protocols. Also emphasized are implementation of statistical and epidemiological tools for understanding disease risk and prevention, etiology and prognosis, and evaluating the success and clinical relevance of preventive interventions. The fundamentals of biostatistics and epidemiology are taught together, highlighting the relevance of these two disciplines to the understanding and interpretation of medical data. Health promotion: The physician’s role (2nd year): This is one of two courses initiated following the committee’s detection of gaps in training medical students. Using epidemiological concepts and terms acquired during the first year, students are introduced to the main concepts, principles, and methods of health promotion at the individual and population levels. Students practice communicating and marketing healthy lifestyle to patients and gain knowledge of the impact of a health promoting environment (e.g. media campaigns, regulatory tools at the local and the national levels) on adoption of a healthy lifestyle. The course started as an 8-week short course but was broadened during the year 2015–2016, to include three sessions on exercise and physical activity: the approach to medical examinations before starting a physical activity program in healthy and diseased patients; the responsibility of the physician to evaluate the level of physical activity of their patients and to encourage them to exercise (Hoffman, et al. 2016); and the comprehensive physical activity prescription, which is a required responsibility of physicians to be provided to each of their patients who enters an exercise program (Joy, et al. 2016). This last session includes the students’ writing their own exercise prescription and a practical experience in training according to this prescription. An additional topic is a two lecture session in oral hygiene and its association to systemic diseases and medications. Selected paradigms in epidemiology and public health (3rd year): Following the basic course in epidemiology and biostatistics in the first year, this intensive one-week course gives an overview of the epidemiology of specific diseases and conditions such as cancer, cardiovascular disease, diabetes, infectious diseases, geriatric and childhood diseases, maternal and child health, and psychiatric illnesses. The course emphasizes the specific methodologies used for the study of these illnesses and conditions and presents the specific disease registries available. The second part of the course focuses on the national health system, and aims to elucidate the role of the clinician as a public health promoter in the national health system. The paradigm of combining health policy with clinical decision making is emphasized, using relevant and timely examples. Tools for practicing Evidence Based Medicine (EBM) (3rd year): Tools and techniques are provided for practicing EBM, by means of workshops and simulations of real life situations. At the end of the course, the student should be able to frame a clinical question in view of a specific clinical situation, search the medical literature, obtain the most relevant material, and critically appraise the literature so as to achieve the best available solution to the clinical question. 
This course reinforces the competencies provided in the first and second years and requires the student to apply them. The use of epidemiologic methods in clinical decision making (3rd year): This course provides the epidemiological background to the major body organs and systems taught in the third and fourth years, while focusing on how epidemiology is used for clinical decision-making. Specific examples are presented from body systems such as the gastrointestinal and urinary tracts. The course is intended to reinforce skills covered in the first year, while exploiting the advanced stage attained in the students’ basic medical knowledge. E-learning course in planning and writing research proposals for the M.D. thesis (4th - 6th year): This electronic course is designed to provide students with the necessary competencies to develop research questions and to formulate the research methodology relevant to their MD thesis. The course is built on the knowledge and capabilities of implementing the competencies taught during previous years; and it is presented through a set of online guided tools. Clerkship in public health and epidemiology (6th year): Experiential learning in EBM in public health. During this 1 week interactive workshop the students experience the implementation of epidemiological tools from data collection and analysis to public health planning and decision making. The course includes practical examples such as prevention of cervical cancer or the implementation of various programs for secondary prevention of breast cancer and their impact on breast cancer mortality. As in other clinical clerkships, the students experience the process of decision making. In this case it relates to decisions in public health. At this stage, just before graduation, the students have most of the medical knowledge they will acquire during their MD degree. They have the ability to use clinical and epidemiological competencies to understand the broad range of considerations involved in health policy at the individual and population levels. Program evaluation The revised public health curriculum was implemented with first year students during 2013–2016/17. We have been revising and refining the courses of the first, second and third years according to feedback from students and lecturers. All courses in our school have a computerized feedback system, which is opened from the last lecture till the final exam, and is filled on a voluntary anonymous basis. In addition, meetings are held with the students’ representatives to discuss their expectations and feedback, and an attempt to integrate necessary changes in the courses is continuously performed. In the coming academic year (2017–18) the last class from the old curriculum will graduate. At the end of this year we will conduct a survey among these students during the clerkship in public health to evaluate their perceived understanding of public health topics and of the competencies we intended to convey in our curriculum. We will repeat this survey among the following class – the first to experience the full 6-years revised curriculum, and compare the results. In the future we intend to assess the quality of MD theses submitted at graduation, according to exposure to the intervention, and to compare evaluations of EBM skills during clinical clerkships. We expect more MD theses to be published as papers in peer-reviewed international journals. 
The committee defined 3 main goals of training of medical students according to their future needs and responsibilities: a) critical appraisal of the scientific literature to inform practice; b) conducting research using epidemiological tools and methods; and c) practicing and advocating health promotion and disease prevention in the clinic. Following these goals the main competencies physicians require were defined: Skills to appraise the quality of the various types of epidemiologic research and to acquire tools for comprehensive reading and understanding scientific literature according to EBM; Competency in efficient and precise literature search; Competency in basic statistical skills; Competency in planning and conducting research, i.e. knowledge of epidemiological methods including the various study designs, choice of an appropriate study population, methods for data collection, analysis and interpretation of study results; Competency in applying health promoting principles and strategies in the selection of disease prevention measures and recommendations; Competency in implementation of EBM techniques in public health decision making, e.g. immunizations and population screening; and Competency in examining and analyzing disease trends from a population perspective. In addition, we identified the importance of understanding the structure of health systems and of increasing the awareness of the role of the physician in these systems as a means of better pursuing the skill of practicing and advocating health promotion and disease prevention in the clinic. The committee performed an overview of all relevant education and training syllabus at the Sackler School of Medicine of the Tel-Aviv University. All lectures in each course were reviewed and overlapping topics given in more than one lecture were identified. This process also enabled detecting important topics that were absent in the curriculum. The committee met all teachers and instructors and reviewed the courses syllabus with them. Those with overlapping lectures were asked to meet and revise their courses so that no unnecessary overlaps persisted. Two new courses were planned to fill in the gaps in important topics. The entire 6 year curriculum was presented and approved, first to the faculty of the School of Public Health, and then to the faculty of Sackler School of Medicine (see Table ). The new public health curriculum in our medical school is based on a longitudinal approach and was designed to harmonize and integrate the clinical and public health teaching to increase relevance, and to address the above-mentioned competencies. The public health curriculum starts early in the first year of medical school and progresses systematically, with each year building on competencies already gained. The goal is efficient utilization of time and avoidance of repetitions. The limited timeframe allocated to public health training within the busy and competitive medical school curriculum is a constraint of the program. The courses and skills provided in the longitudinal public-health curriculum as part of the 6 year medical training of the Sackler Medical School are the following (see Fig. 
, illustrating the concept that epidemiology and statistics are the foundation, allotted a substantial number of hours in the curriculum, on which medical students gradually build their public health knowledge; the number of hours gradually decreases while the topics learned become more sophisticated, so that in the last year a relatively small, albeit very important, part of the clerkships draws on this learning): Epidemiology, statistics, and research methods (1st year): This course was re-designed to achieve a comprehensive and integrative understanding of key epidemiologic and biostatistics methods. The goals of the course are to improve students’ abilities to understand and interpret epidemiological studies and to provide practical experience in epidemiological research, study design, and key methods in biostatistics. Topics covered in the course include the ability to integrate information and data, build statistical models, conduct data analysis, and acquire tools for decision making in selecting diagnostic tools and treatment protocols. Also emphasized are the implementation of statistical and epidemiological tools for understanding disease risk and prevention, etiology and prognosis, and evaluating the success and clinical relevance of preventive interventions. The fundamentals of biostatistics and epidemiology are taught together, highlighting the relevance of these two disciplines to the understanding and interpretation of medical data. Health promotion: The physician’s role (2nd year): This is one of two courses initiated following the committee’s detection of gaps in training medical students. Using epidemiological concepts and terms acquired during the first year, students are introduced to the main concepts, principles, and methods of health promotion at the individual and population levels. Students practice communicating and marketing a healthy lifestyle to patients and gain knowledge of the impact of a health-promoting environment (e.g. media campaigns, regulatory tools at the local and national levels) on adoption of a healthy lifestyle. The course started as an 8-week short course but was broadened during 2015–2016 to include three sessions on exercise and physical activity: the approach to medical examinations before starting a physical activity program in healthy and diseased patients; the responsibility of the physician to evaluate the level of physical activity of their patients and to encourage them to exercise (Hoffman et al., 2016); and the comprehensive physical activity prescription, which physicians are required to provide to each of their patients who enters an exercise program (Joy et al., 2016). This last session includes students writing their own exercise prescriptions and practical experience in training according to these prescriptions. An additional topic is a two-lecture session on oral hygiene and its association with systemic diseases and medications. Selected paradigms in epidemiology and public health (3rd year): Following the basic course in epidemiology and biostatistics in the first year, this intensive one-week course gives an overview of the epidemiology of specific diseases and conditions such as cancer, cardiovascular disease, diabetes, infectious diseases, geriatric and childhood diseases, maternal and child health, and psychiatric illnesses.
The course emphasizes the specific methodologies used for the study of these illnesses and conditions and presents the specific disease registries available. The second part of the course focuses on the national health system and aims to elucidate the role of the clinician as a public health promoter in the national health system. The paradigm of combining health policy with clinical decision making is emphasized, using relevant and timely examples. Tools for practicing Evidence-Based Medicine (EBM) (3rd year): Tools and techniques are provided for practicing EBM, by means of workshops and simulations of real-life situations. At the end of the course, the student should be able to frame a clinical question in view of a specific clinical situation, search the medical literature, obtain the most relevant material, and critically appraise the literature so as to achieve the best available solution to the clinical question. This course reinforces the competencies provided in the first and second years and requires the student to apply them. The use of epidemiologic methods in clinical decision making (3rd year): This course provides the epidemiological background to the major body organs and systems taught in the third and fourth years, while focusing on how epidemiology is used for clinical decision-making. Specific examples are presented from body systems such as the gastrointestinal and urinary tracts. The course is intended to reinforce skills covered in the first year, while exploiting the advanced stage attained in the students’ basic medical knowledge. E-learning course in planning and writing research proposals for the M.D. thesis (4th–6th year): This electronic course is designed to provide students with the necessary competencies to develop research questions and to formulate the research methodology relevant to their MD thesis. The course builds on the knowledge and capabilities acquired through the competencies taught during previous years, and it is presented through a set of online guided tools. Clerkship in public health and epidemiology (6th year): Experiential learning in EBM in public health. During this 1-week interactive workshop the students experience the implementation of epidemiological tools, from data collection and analysis to public health planning and decision making. The course includes practical examples such as prevention of cervical cancer or the implementation of various programs for secondary prevention of breast cancer and their impact on breast cancer mortality. As in other clinical clerkships, the students experience the process of decision making; in this case it relates to decisions in public health. At this stage, just before graduation, the students have most of the medical knowledge they will acquire during their MD degree. They have the ability to use clinical and epidemiological competencies to understand the broad range of considerations involved in health policy at the individual and population levels. Program evaluation: The revised public health curriculum was implemented with first-year students during 2013–2016/17. We have been revising and refining the courses of the first, second, and third years according to feedback from students and lecturers. All courses in our school have a computerized feedback system, which is open from the last lecture until the final exam and is completed on a voluntary, anonymous basis.
In addition, meetings are held with the students’ representatives to discuss their expectations and feedback, and necessary changes are continuously integrated into the courses. In the coming academic year (2017–18) the last class from the old curriculum will graduate. At the end of this year we will conduct a survey among these students during the clerkship in public health to evaluate their perceived understanding of public health topics and of the competencies we intended to convey in our curriculum. We will repeat this survey among the following class, the first to experience the full 6-year revised curriculum, and compare the results. In the future we intend to assess the quality of MD theses submitted at graduation according to exposure to the intervention, and to compare evaluations of EBM skills during clinical clerkships. We expect more MD theses to be published as papers in peer-reviewed international journals. Public health topics have been taught by the Division of Epidemiology and Preventive Medicine ever since the Sackler School of Medicine was established. The curriculum evolved over the years “bottom-up”, and when a decision was made to implement a competency-oriented approach to the medical curriculum at large, we revised our curriculum. The Sackler School of Medicine committee re-designed a comprehensive curriculum in epidemiology and public health, which covers the range of topics central to current medical students’ education in those fields. Among its goals, the revised curriculum focuses on competencies required to critically appraise the medical scientific literature. The curriculum has been implemented and fits the national system of medical education, which spans 6 years of training. Our longitudinal curriculum is based on the need for a competency-based medical education (CBME) approach and an emphasis on research methods in statistics and epidemiology, preventive medicine, and the application of population health principles in medical education. This is in line with the international move towards improved integration of public health concepts, practice, and research methods in medical training. Our intended outcome is that medical school graduates will be curious and have the motivation and competencies to obtain the evidence-based information they need to provide scientifically sound care to their patients; that they will have the skills to conduct research and to critically evaluate existing evidence; and that they will maximize their role in disease prevention and healthy lifestyle promotion. Through this longitudinal exposure, students are reminded at all stages of their medical education of the importance and relevance of the sciences as the basis of medical knowledge and of evidence as the basis for better medical care, prevention, and public health.
Visualizing the Cellular and Subcellular Distribution of Fms-like Tyrosine Kinase 3 (Flt3) and Other Neuronal Proteins Using Alkaline Phosphatase (AP) Immunolabeling
The brain expresses the majority of known protein-coding genes. In many cases, the cellular expression and subcellular distribution of their protein products—including signaling receptors, ion channels, transporters, and synaptic proteins—remain poorly characterized. One such example is the receptor tyrosine kinase Flt3, which is critical for blood cell development and is implicated in the pathogenesis of leukemia when dysregulated. While the presence of Flt3 mRNA in the brain has been reported, Flt3 protein expression in the brain lacks rigorous investigation. Single-cell sequencing studies of mouse and human brain tissue reveal high Flt3 mRNA expression in the cerebellum, particularly in Purkinje cells, yet the extent to which Flt3 protein expression correlates with its mRNA level within these cells remains unknown. Moreover, the subcellular localization of the Flt3 protein is a crucial determinant of signaling strength and mode, but cannot be inferred from RNA sequencing data. Our group’s previous unbiased drug screening efforts discovered that pharmacological inhibition of Flt3 kinase signaling in brain cells enhances the expression of KCC2—a neuron-specific chloride transporter protein which is essential for GABAergic inhibition and normal brain development. These findings suggest an unexpected role of Flt3 signaling in neurons and underscore the need for sensitive in situ protein detection methods to study its disposition in brain tissue comprehensively. Toward this end, we sought to develop an immunolabeling method with high sensitivity and resolution to investigate Flt3 expression patterns in neurons. The standard fluorescent immunostaining method relies primarily on fluorescent secondary antibodies, which are large proteins with limited binding affinity, signal amplification, and sensitivity. One frequently used alternative is horseradish peroxidase (HRP)-based histochemistry, in which HRP conjugated to a secondary antibody via the avidin/biotin system catalyzes an in situ reaction that oxidizes a chromogenic substrate, resulting in robust signal amplification. However, this method produces a non-fluorescent, monochromatic deposit that interferes with fluorescent light emission, making it unsuitable for simultaneous staining of other target proteins, such as cell-type or subcellular-organelle markers, labeled by fluorescent antibodies. To overcome this limitation, we found that the alkaline phosphatase polymer histochemistry technique (AP-IHC), invented by Vector Laboratories, is a highly sensitive staining method that can be combined with other fluorescent antibodies in a co-staining protocol. AP-IHC employs secondary antibodies conjugated to multiple copies of alkaline phosphatase, enabling in situ deposition of fluorescent products for high-sensitivity protein labeling. Although AP-IHC has been used in histological analysis, it has not been applied to immunolabeling in brain tissue, which presents unique challenges: the brain’s complex cellular composition, high lipid content, and the presence of fine structures such as synapses where certain proteins are concentrated. Therefore, it is important to establish a strategy for applying the AP-IHC method in brain studies.
In this study, we developed an optimized workflow for applying the highly sensitive AP-IHC in brain tissue and, more importantly, introduced a hybrid method that combines AP-IHC with conventional immunofluorescent staining, enabling multiplexed in situ protein labeling. Using this hybrid approach, we systematically investigated Flt3 expression during cerebellar development alongside various proteins essential for neuronal functions. Our work provides the first demonstration of the versatility and sensitivity of AP-IHC for detecting a broad range of antigens in brain cells with cellular and subcellular resolution. Through direct comparisons with standard Alexa Fluor-conjugated antibodies and HRP-based histochemistry, we show that AP-IHC offers superior sensitivity and specificity for visualizing Flt3 kinase in both developing and adult mouse brain samples. This sensitivity, combined with the compatibility of AP-IHC co-staining with traditional immunofluorescence, enabled the discovery of a neuron-specific pattern of Flt3 expression in brain tissues. We also show that AP-IHC successfully detects Kir2.1, an inward-rectifying potassium channel expressed at low levels, in mouse, pig, and human brain tissue samples. In addition, we extended the use of AP-IHC to label the neuronal antigen PSD95, which is enriched in microscopic subcellular synaptic structures, in human stem cell-derived neurons. These results establish AP-IHC as a powerful and versatile tool for high-sensitivity, high-resolution in situ detection of brain proteins, facilitating detailed investigation of cellular and synaptic protein distributions in tissue or cell culture preparations across multiple species.
2.1. Establish the Spatial–Temporal Pattern of Flt3 Expression During Brain Development Using AP Immunolabeling
To lay the groundwork for mechanistic studies of Flt3 signaling in the brain, we set out to define the expression pattern of Flt3 in brain tissue. Our focus was on the cerebellum, as previous studies have reported enriched Flt3 mRNA expression in this region. For this, we adapted the AP-IHC method, traditionally used in peripheral tissue analysis, for use in brain tissue. Compared to regular immunofluorescent staining (IF) with a secondary fluorescent antibody, which failed to produce a consistent Flt3 expression pattern ( A–D), the AP-IHC method successfully highlighted the Flt3 protein with bright fluorescence in mouse cerebellum slices ( E–H). The staining pattern closely resembled the results obtained using the HRP method, which produces a monochromatic deposit ( I–K). Notably, the AP-IHC method revealed distinct Flt3 expression differences between the cerebellar granular layer (GL) and molecular layer (ML)—distinctions that were less discernible with the HRP method. Quantitative analysis confirmed that the AP-IHC method significantly improved Flt3 labeling sensitivity and signal-to-noise ratio, achieving a five- to seven-fold higher staining intensity compared to conventional immunofluorescence ( L). These findings demonstrate the enhanced capability of AP-IHC for investigating the spatial–temporal patterns of Flt3 expression during brain development. Among the advantages of the AP-IHC method, as described above, a unique feature is its ability to produce fluorescent signals in the Cy3 channel without interfering with other fluorescent channels. This capability enables multiplexed co-labeling of various antigens when combined with conventional fluorescent immunostaining. Using a hybrid protocol developed through fine-tuning of detergent usage, we uncovered a previously uncharacterized pattern of Flt3 protein expression in mouse cerebellar tissue. Our results reveal that Flt3 expression is restricted to neurons, with no overlap observed in GFAP+ astrocytes or Iba1+ microglia. Interestingly, Flt3 expression is particularly enriched in Purkinje cells (PCL), which co-express parvalbumin (PV) or calbindin, whereas NeuN+ granular cells show only a limited level of Flt3 staining. We further validated Flt3 expression in neurons, particularly in cerebellar inhibitory neurons, in human brain tissue. Using the AP-IHC hybrid staining method, we successfully detected Flt3 in postnatal human cerebellum sections that had been preserved in formalin for several years. As in mouse cerebellum tissue, FLT3 is enriched in human Purkinje cells and their dendrites in the molecular layer ( A). FLT3 staining co-localizes with the general neuronal marker TUJ1 (detecting β-III tubulin) and the inhibitory neuronal marker parvalbumin (PV) ( A). In the Purkinje cell layer, about 98% of FLT3+ cells are TUJ1+, and 95% are PV+. Besides thin sections from cryostat (10 µm in and ) or paraffin-embedded (5 µm in A) samples, the AP-IHC hybrid method can be applied to floating staining of thick mouse brain slices (40 µm, B), which are commonly used in brain studies. To confirm that the Flt3 staining is not restricted to the tissue surface but is well distributed throughout the entire thickness of the section, we used confocal microscopy to examine the staining.
Our analysis revealed that Flt3 AP-IHC immunoreactivity is evenly distributed across a series of confocal scanned sections. Consistent with staining in thin sections, the deposition from Flt3 AP-IHC in thick free-floating mouse brain sections does not interfere with other co-stained markers ( B, ). Across different species and tissue conditions, our results have shown that Flt3 expression in the cerebellum is restricted to the cell body and processes of Purkinje cells and other neurons. This neuron-specific expression pattern suggests a potential specialized role for Flt3 in cerebellar function and development. Building on the finding of neuron-specific expression of Flt3, we employed the AP-IHC hybrid method to investigate the temporal dynamics of Flt3 expression during mouse brain development. Our analysis revealed a significant increase in Flt3 immunoreactivity within the cerebellum from postnatal day seven to two months of age, suggesting a developmental upregulation of Flt3 gene expression ( A,B). Importantly, the subcellular localization of Flt3 underwent a developmental shift. At early postnatal stages, Flt3 was predominantly cytosolic, while in the mature brain a substantial proportion of the protein was observed in dendrites, indicating potential developmental changes in its functional roles. Thus, the AP-IHC hybrid method has allowed us to establish, for the first time, the brain region- and cell-type-specific expression pattern of Flt3, which aligns closely with previously reported single-cell sequencing data. Furthermore, this approach enabled the in situ detection and quantification of Flt3 subcellular localization across developmental stages, uncovering a dynamic cytosol-to-dendrite redistribution in cerebellar Purkinje cells and laying the foundation for future research on the role of Flt3 in cerebellar function and its implications for brain health and disease.
2.2. AP-IHC Immunolabeling of Potassium Channel Kir2.1 in Mouse, Pig, and Human Brain Tissues
To further evaluate the versatility of the AP-IHC immunolabeling method, we explored its application in detecting Kir2.1, a voltage-gated inward-rectifying potassium channel, in brain tissue. Kir2.1 plays a crucial role in regulating the neuronal resting membrane potential but is expressed at low levels in the brain, complicating its detection. Moreover, its cell-type-specific expression patterns in the brain remain poorly characterized. We compared Kir2.1 immunostaining in mouse brain tissue using either regular fluorescent immunolabeling or the AP-IHC method. The AP-IHC method demonstrated substantially improved labeling sensitivity and signal-to-noise (S/N) ratio ( A,B). To ensure high specificity, negative control experiments were conducted by pre-incubating the Kir2.1 primary antibody with an excess of Kir2.1 blocking peptide (Kir2.1+BP) for 30 min to neutralize the antibody before applying it to tissue sections (see the Methods section for details). Expanding beyond mouse tissue, we applied the same AP-IHC immunolabeling protocol to human ( C,D) and pig cortical sections ( E,F). Consistent staining patterns across species were observed, underscoring the method’s robustness. High-resolution microscopy revealed that Kir2.1 staining was localized around neurons, co-labeled with the neuronal marker NeuN ( F), suggesting that Kir2.1 expression is predominantly neuronal in the cortex.
This study highlights the AP-IHC method as a highly sensitive and specific tool for detecting low-abundance membrane proteins such as Kir2.1 across species, providing new insights into their cellular and subcellular distribution in the brain.
2.3. AP Immunolabeling of PSD95 Within the Synapses in Human Stem Cell-Derived Neurons
Building on the success of the AP-IHC hybrid immunolabeling method for visualizing protein localization at cellular resolution, we extended its application to examine fine subcellular structures, particularly synapses. Using cultured human induced pluripotent stem cell (iPSC)-derived neurons, which are widely applied in disease modeling and drug screening studies, we explored the labeling of the synaptic scaffold protein PSD95 using the AP-IHC method. Our prior work demonstrated the functional maturation of synapses in human iPSC-derived neurons co-cultured with astrocytes, yet regular immunofluorescent staining cannot adequately visualize PSD95 in human neurons (unpublished results and private correspondence). In this study, a MAP2 antibody was used to mark neuronal dendrites, while PSD95 staining was performed using either the standard immunofluorescence method or AP-IHC hybrid immunolabeling. Compared to standard immunostaining, the AP-IHC method resulted in robust labeling of discrete PSD95-positive synaptic puncta along the MAP2+ dendrites. The specificity of staining was confirmed by pre-incubating the PSD95 antibody with its blocking peptide (PSD95+BP), which diminished the signal. These findings highlight the potential of AP-IHC immunolabeling to investigate fine structures such as synapses with high sensitivity and specificity. This methodology represents a powerful tool for neuroscience research, enabling detailed studies of the cellular and subcellular distribution of important neuronal antigens.
This study offers the first comprehensive characterization of the expression pattern of the receptor tyrosine kinase Flt3 during brain development. The Flt3 molecule is well studied in the context of hematology, particularly for its role in leukemia, where constitutive phosphorylation due to mutations drives oncogenesis. A number of Flt3 inhibitors have been developed for leukemia treatment. However, its roles in the brain remain largely unexplored, primarily due to the challenges of detecting its protein expression at cellular resolution. Recent findings have linked Flt3 signaling to neurological processes. Pharmacological inhibition of Flt3 has been associated with increased expression of KCC2, a chloride transporter critical for GABAergic inhibition, suggesting potential applications in treating neurological disorders. A recent study shows that Flt3 is expressed in Purkinje cells in cerebellar tissue. Using our optimized hybrid AP-IHC immunolabeling method, we found that the Flt3 protein in the cerebellum is predominantly expressed in GABAergic Purkinje neurons. During cerebellar development, Flt3 localization in Purkinje cells shifts from primarily cytosolic at early stages of brain development to a dendrite-enriched pattern in the molecular layer of mature brains. These findings align with previous evidence that Flt3 modulates neuronal gene expression and highlight its potential involvement in brain development and neurological diseases. This study not only resolves the difficulties of visualizing Flt3’s neuronal expression patterns but also lays a foundation for further research into its functional roles in the brain. These insights may open new avenues for leveraging Flt3-targeting therapeutics in neurodevelopmental and neurodegenerative disorders. Through this study, we developed a hybrid method combining the high sensitivity of AP-polymer-based histochemistry with standard immunofluorescent staining, enabling robust co-staining of multiple markers. HRP histochemistry, while highly sensitive and useful for detecting proteins expressed at low levels, has traditionally been limited by the difficulty of co-staining, which is often required for mechanistic investigation. This limitation arises because standard histochemical reactions yield chromogenic products, such as brown (DAB substrate) or blue (hematoxylin), which overlap and cannot be separated effectively for multiplex analyses. Additionally, the color palette is inherently restricted. The AP-IHC methodology overcomes these limitations by offering two key advantages: (1) Enhanced sensitivity: the use of micro-polymer technology significantly amplifies staining sensitivity, making it highly effective for low-abundance proteins. (2) High fluorescent output: the fluorescent substrate of AP enables compatibility with fluorescence microscopy, facilitating multiplexed visualization. Our hybrid method utilizes a sequential staining protocol, in which AP polymer histochemistry labels weakly expressed proteins, while immunofluorescent staining is applied to mark additional antigens. The fine-tuning of detergent usage, as detailed in the Methods section, is crucial for the success of this technique. This approach allows researchers to visualize distinct fluorescent signals with different wavelength filters, bypassing the limitations of traditional chromogenic co-staining methods.
We anticipate that this hybrid method will be a valuable tool for the research community, offering a practical solution for studies requiring high sensitivity and multiplexed analysis of protein expression. A potential limitation of the AP-IHC method is that it may not effectively label all low-abundance proteins. Therefore, ensuring staining specificity should always be a top priority. To minimize the risk of false-positive signals, a negative control should always be included in parallel with each staining experiment. Ideally, the negative control should be on the same glass slide, adjacent to the sections stained with specific antibodies, to ensure identical processing conditions. An isotype-matched IgG negative control should be used instead of simply omitting the primary antibody. Only staining signals significantly above the negative control can be considered positive and specific. Besides specificity, it is also important to assess whether AP-IHC staining capabilities extend beyond the rabbit-host reagents used in this study. Additionally, exploring AP-IHC’s compatibility with hybrid co-staining, such as that proposed in this study, could further expand its utility for research. In this study, we demonstrated the feasibility and versatility of applying a highly sensitive AP-IHC plus conventional immunofluorescence staining method to label a variety of proteins crucial to neuronal function. Using this method, we achieved detailed immunolabeling of the developmental trajectories and protein localization, at cellular and subcellular resolution, of Flt3 and several other key neuronal proteins. We established the neuron-enriched expression pattern of the inward-rectifying potassium channel Kir2.1 in mouse brain tissue, a feature conserved in pig and human brain tissues. This cross-species consistency underscores its functional relevance and provides a cellular basis for targeting Kir2.1 in therapeutic interventions. The AP-IHC hybrid immunolabeling method proved capable of resolving synaptic-scale structures by labeling the post-synaptic density protein PSD95 in human stem cell-derived neurons. This advancement opens new possibilities for studying synapse-level organization and protein interactions, an area critical to understanding neuronal connectivity and plasticity. The AP-IHC hybrid immunolabeling method demonstrates broad applicability across various sample types, including mouse, pig, and human brain tissue as well as cultured human stem cell-derived neurons. Its ability to enhance detection sensitivity while resolving complex subcellular structures makes it a powerful tool for neuroscience research. Our results underscore the utility of in situ protein detection techniques, such as AP immunolabeling, for bridging the gap between transcriptomic data and protein-level insights. By validating RNA sequencing results and providing spatial and functional context for the protein products of genes of interest, this approach offers a powerful complement to existing genomic tools and datasets. It enables precise mapping of protein localization and dynamics within neuronal systems, thereby advancing our understanding of brain development and synaptic organization. Furthermore, this method holds significant potential for identifying and characterizing therapeutic targets in translational neuroscience research.
4.1. Animals
Normal wild-type C57BL/6J mice (0–2 months old; The Jackson Laboratory, Bar Harbor, ME, USA; Strain #000664) were used for generating the mouse brain tissue slices. Animal housing and perfusion procedures for tissue harvesting followed the policies of Boston Children’s Hospital (BCH) and were approved by the BCH Institutional Animal Care and Use Committee.
4.2. Tissue Sample Preparation
Mouse, pig, and human brain tissue sections were used in this study. Mouse brain sections were primarily used to compare three different immunostaining methods (described below), establish the AP-IHC and hybrid co-immunostaining methods, and examine the expression pattern of the Flt3 protein during brain development. Pig and human brain sections were included to evaluate the applicability of the staining method across different species and tissue storage/processing conditions. For mouse brain section preparation, mice at all required ages were euthanized by isoflurane overdose and cervical dislocation and perfused transcardially with cold saline (0.9% NaCl) followed by 4% paraformaldehyde (PFA). The perfused brains were post-fixed in 4% PFA for 2 days at 4 °C, then transferred to 30% sucrose (changed at least twice) until cryostat sectioning. For most studies, the O.C.T.-embedded mouse brains were cut sagittally at 10 µm thickness using a Leica cryostat (CM3050S, Leica Biosystems, Deer Park, IL, USA), and the sections were directly mounted onto VWR Superfrost glass slides (see ). In the floating staining experiment, the brain blocks were cut at 40 µm thickness and the slices were collected in PBS tubes. Paraffin-embedded pig brain coronal sections were acquired from Dr. Jianhua Qiu of the Mannix lab, Boston Children’s Hospital. Formalin-fixed human brain tissues were acquired from the BCH pathology department and processed in the Pathology Core of Beth Israel Hospital for paraffin embedding and sectioning (5 µm thickness). Two brain sections were mounted on each glass slide. The sample size and usage principle for each experiment depended on the specific goals of the study. Following the ARRIVE guidelines 2.0, we provide detailed information for each experiment below. : The objective was to compare 3 different staining methods. In each round of staining, we used neighboring mouse brain sections to ensure identical tissue conditions, with the only variable being the staining method. Images were taken from the same cerebellar region to facilitate direct comparison. Data from were obtained from three independent rounds of staining using brain sections from three adult mice. : This experiment aimed to establish the hybrid co-staining method and examine the cellular expression pattern of Flt3. Sections from four postnatal D14 mouse brains (equal distribution of males and females) were used. : Human brain sections were obtained from a single tissue block, and multiple rounds of staining were conducted to validate AP-IHC performance and test co-staining with other antibodies, as many antibodies did not work well in long-term fixed human brain samples ( A). In B, thick brain sections were sourced from the same animals as in . The goal was to demonstrate that AP-IHC is effective in floating section staining. : This experiment quantitatively assessed Flt3 expression during development. We analyzed Flt3 expression at postnatal D7, D14, D21, and D60, using 3–4 mice per time point (approximately equal male-to-female ratio), totaling 15 mouse brains.
In each brain, we chose at least 3 pairs of cerebellum sections >100 µm apart, resulting in 15 × 3 = 45 pairs of sections for the staining. Within each pair, one section was stained for Flt3+NeuN, while the adjacent section on the same glass slide served as a negative control (for measuring the staining baseline). In each stained cerebellum section, including both the negative control and the specific Flt3 staining, 3–4 different areas were imaged, and their average intensity was taken as representative. The exposure time was kept the same across all imaging: it was initially set to the most suitable time for the sections with the brightest Flt3 staining, to avoid overexposure, and this time was then applied to all images. The Flt3 intensity on each glass slide was first averaged across its own imaged areas, then normalized by the negative control of the neighboring section, and finally counted into the groups for statistical analysis. : This experiment tested the AP-IHC method’s ability to detect Kir2.1, a challenging protein to stain for. In A,B, motor cortex sections from three adult mice (the same animals as the D60 group in ) were used for staining comparison. As in , neighboring section pairs were selected to compare standard immunofluorescent staining with AP-IHC. Each pair included one section stained for Kir2.1+NeuN and another control section that underwent the same staining in the presence of the Kir2.1 antibody blocking peptide. C,D compared human cortex sections, all sourced from a single tissue block. E,F were from a single pig brain. : All samples were from cultured human iPSC-derived neurons. Coverslips from the same culture batch were stained simultaneously to compare different staining methods for detecting PSD95, a protein well known to be difficult to detect through immunostaining in human neurons.
4.3. Human Stem Cell-Derived Neuron Culture and Cell Sample Preparation
Human embryonic stem cell (hESC)-derived neural progenitor cells (NPCs, generated from the WIBR1 hESC line) were seeded at 10,000 cells per well onto glass coverslips pre-seeded with an astrocyte feeder layer and cultured for 5 months before PSD95 staining.
4.4. Immunolabeling
Three different immunolabeling methods were used in this work: regular immunofluorescent staining, HRP-based histochemistry, and the AP polymer-based histochemistry hybrid with regular fluorescent staining. All information on primary antibodies, secondary antibodies, and other key materials and reagents is summarized in , respectively.
Regular immunofluorescent staining: The cryostat brain sections were heated on a heating plate for 20 min (37–42 °C) to dry off moisture from the freezer, and the O.C.T. was washed off in Tris-buffered saline (TBS). The pig and human brain paraffin-embedded sections went through xylene and a series of graded ethanol concentrations and were washed in TBS. All brain slides underwent antigen retrieval at 95–100 °C for 10 min, then incubation with blocking buffer (5% serum + 2% BSA in TBS-T (0.1% Tween-20)) for 1 h at room temperature, primary antibody incubation overnight at 4 °C, and incubation with fluorescent secondary antibodies, and were then coverslipped with DAPI-fluoromount mounting medium.
HRP-IHC: As in the regular immunofluorescent staining, the brain section slides underwent antigen retrieval and were incubated with 0.1% H2O2 to inactivate endogenous HRP.
After blocking, one section on each slide was incubated with the Flt3 primary antibody at 4 °C overnight, and the other section on the same slide was used as the staining control, with the same amount of rabbit IgG replacing the Flt3 rabbit antibody. On the second day, after the TBS-T wash, the slides were incubated with goat anti-rabbit biotinylated secondary antibody, followed by VECTASTAIN ABC Reagent, and developed in freshly made DAB peroxidase substrate solution. Color development was monitored under the microscope, by comparison with the control section on the same slide, until satisfactory. The stained slides then went through a series of EtOH and xylene washes and were coverslipped with Cytoseal.
AP-IHC hybrid with regular fluorescent immunostaining: For the co-staining of Flt3 (rabbit) with NeuN (mouse), GFAP (chicken), Iba1 (goat), parvalbumin (chicken), and calbindin (goat) in brain sections, we used AP-IHC to boost the Flt3 signal and co-stained with other cell-specific markers. The brain slides were processed the same as in the above HRP-IHC method (except that we omitted the H2O2 step) until the addition of the secondary antibody. Instead of the biotinylated secondary antibody used in HRP histochemistry, a horse anti-rabbit AP-conjugated secondary antibody was applied for 10 min at room temperature. After a brief wash with TBS-T and TBS, we developed the AP with the substrate solution, monitored the color under the microscope, and stopped the reaction with an immediate TBS wash. AP-IHC produced a red fluorescent precipitate (wavelength similar to Alexa 594). The brain slides were then incubated with other fluorescent secondary antibodies for the co-stained fluorescence (e.g., Alexa 488, 647, etc.) and, after brief washes, were coverslipped with DAPI-fluoromount mounting medium. For the thick brain slices, floating staining was carried out in Eppendorf tubes or in 24-well-plate wells. The same procedures as for the slide-mounted sections were used, except that high-pH washing buffers, TBS-T or TBS (made from Trizma base without adjusting pH), were used in order to minimize the background. After the co-staining was finished, we mounted the brain slices onto glass slides and coverslipped them with DAPI-fluoromount mounting medium. A negative control tube was also set up and processed in parallel in each staining. For cultured cell staining, the procedures were the same as above, except that PBS-T (0.2% Triton X-100) and PBS replaced TBS-T and TBS. A negative control cell coverslip was also set up and processed in parallel in each staining. There are several critical points for successful co-staining with this hybrid method: (1) After the AP-IHC color development, no detergent buffer may be used in the subsequent co-staining procedures, since detergent destroys the precipitate pattern from AP-IHC. (2) Always include a negative staining control, with the same amount of IgG replacing the primary antibody, on the other brain section of the same glass slide. (3) As in HRP histochemistry, endogenous AP inactivation should be considered. Heating slides (as in antigen retrieval) is a good way to inactivate endogenous AP; if the staining protocol does not include heating, an AP inhibitor (levamisole) may be considered.
4.5. Imaging and Quantification
With our strict controls in every staining of every single slide (two sections on one slide; the same amount of IgG replacing the antibody, or blocking peptide pre-incubated with the antibody, for the control; exactly the same development time; etc.), images of the positive staining and the control staining were always taken with exactly the same exposure settings. Most images in this study were taken under a regular fluorescence microscope (10×), and some (floating staining of thick brain slices and PSD95 synapse staining images) were taken with a Zeiss LSM 980 with Airyscan 2 confocal microscope (10× for thick-slice imaging, 63× for PSD95 synapse imaging; Oberkochen, Germany). For each of the large pig brain slices, a few hundred images were taken and stitched together automatically with a Zeiss Axio Imager Z2 microscope (Oberkochen, Germany). For staining intensity quantification, the single-channel images and the other co-stained channel images were first merged in ImageJ (1.53k) in order to draw the outlines of the exact areas for quantification. After the measuring areas were outlined, the merged images were split into their color channels and only the target channel (for example, the red channel of Flt3) was used to measure intensity. On each glass slide, the measured intensity data were first normalized by their own staining control data from the other section on the same slide and then included in the groups. The detailed sample numbers depended on the different experiments and are described above in the sample-size section. PSD95 puncta quantification was carried out with the FIJI (version 1.0) puncta measurement procedure. All images generated from microscopes, and graphs from GraphPad Prism 10 (version 10.2.3) or Microsoft Excel (version 16.94), were assembled with Adobe Photoshop 2024.
4.6. Statistics
GraphPad Prism 10 was used for all statistical analysis. Detailed analyses are described in each figure legend.
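To make the quantification workflow in Section 4.5 concrete, below is a minimal sketch in Python (using NumPy and SciPy) of the two computational steps it describes: per-slide normalization of staining intensity against the paired negative-control section, and threshold-based puncta counting. This is not the pipeline actually used in the study (which relied on ImageJ/FIJI and GraphPad Prism); the function names, the toy data, and the fixed threshold are hypothetical and purely illustrative.

import numpy as np
from scipy import ndimage

def normalized_slide_intensity(stained_areas, control_areas):
    # Hypothetical re-implementation of the per-slide normalization in Section 4.5:
    # average the 3-4 imaged areas of the antibody-stained section, then divide by
    # the average of the matched areas of the negative-control section on the same slide.
    stained_mean = np.mean(stained_areas)
    control_mean = np.mean(control_areas)
    return stained_mean / control_mean  # fold over the slide's own control

def count_puncta(image, threshold, min_pixels=4):
    # Simple threshold-and-label puncta count, a stand-in for the FIJI procedure.
    # 'threshold' is a hypothetical fixed cutoff; FIJI typically derives one per image.
    mask = image > threshold
    labels, n = ndimage.label(mask)                      # connected-component labeling
    sizes = ndimage.sum(mask, labels, range(1, n + 1))   # pixel count per component
    return int(np.sum(sizes >= min_pixels))              # keep components above size cutoff

# Toy example: one slide per age group, three imaged areas per section.
slides = {
    "P7": ([210.0, 195.0, 205.0], [100.0, 98.0, 102.0]),
    "P60": ([640.0, 610.0, 655.0], [105.0, 99.0, 101.0]),
}
for age, (flt3_areas, control_areas) in slides.items():
    print(age, round(normalized_slide_intensity(flt3_areas, control_areas), 2))

rng = np.random.default_rng(0)
toy_image = rng.poisson(5, size=(64, 64)).astype(float)  # synthetic background
toy_image[10:13, 10:13] += 50.0                          # one synthetic punctum
print("puncta:", count_puncta(toy_image, threshold=20.0))

The slide-level values produced this way would then be pooled into age groups for statistical comparison, which in the study itself was performed in GraphPad Prism.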
Normal wild-type C57BL/6J mice (0–2 months old, Jackson labs, Bar Harbor, ME, USA: Strain #:000664) were used for generating the mouse brain tissue slices. Animal housing and perfusion procedures for tissue harvesting followed the policies at Boston Children’s Hospital (BCH) and were approved by the BCH Institutional Animal Care and Use Committee.
Mouse, pig, and human brain tissue sections were used in this study. Mouse brain sections were primarily used to compare three different immunostaining methods (described below), establish the AP-IHC method and hybrid co-immunostaining methods, and examine the expression pattern of the Flt3 protein during brain development. Pig and human brain sections were included to evaluate the applicability of the staining method across different species and tissue sample storage/processing conditions. For the mouse brain section preparation, mice were euthanized by overdose of isoflurane and neck dislocation, and perfused transcardially with cold saline (0.9% NaCl) followed by 4% paraformaldehyde (PFA) at all required ages. The perfused brains were post-fixed in 4% PFA for 2 days at 4 °C, then changed to 30% sucrose at least twice until cryostat sectioning. For most studies, the O.C.T.-embedded mouse brains were cut at 10 µm in thickness sagittally using the Leica cryostat machine (Leica CM3050S, Leica Biosystems, Deer Park, IL, USA) and the sections were directly mounted onto the VWR superfrost glass slides (see ). In the floating staining experiment, the brain blocks were cut at 40 µm in thickness and the slices were collected in PBS tubes. Paraffin-embedded pig brain coronal sections were acquired from Dr. Jianhua Qiu in the Mannix lab, Boston Children’s Hospital. Formalin-fixed human brain tissues were acquired from the BCH pathology department and processed in the Pathology Core of Beth Israel Hospital for paraffin embedding and sectioning (5 µm in thickness). Two brain sections were mounted on each glass slide. The sample size and usage principle for each experiment depended on the specific goals of the study. Following the ARRIVE guidelines 2.0, we provide detailed information for each experiment below. : The objective was to compare 3 different staining methods. In each round of staining, we used neighboring mouse brain sections to ensure identical tissue conditions, with the only variable being different staining methods. Images were taken from the same cerebellar region to facilitate direct comparison. Data from were obtained from three independent rounds of staining using brain sections from three adult mice. : This experiment aimed to establish the hybrid co-staining method and examine the cellular expression pattern of Flt3. Sections from four postnatal D14 mouse brains (equal distribution of males and females) were used. : Human brain sections were obtained from a single tissue block, and multiple rounds of staining were conducted to validate AP-IHC performance and test the co-staining with other antibodies, as many antibodies were not working well in long-term fixed human brain samples ( A). In B, thick brain sections were sourced from the same animals in . The goal was to demonstrate that the AP-IHC is effective in floating section staining. : This experiment quantitatively assessed Flt3 expression during development. We analyzed Flt3 expression at postnatal D7, D14, D21, and D60, using 3–4 mice per time point (approximately equal male-to-female ratio), totaling 15 mouse brains. In each brain, we chose at least 3 pairs of cerebellum sections >100 µm apart, resulting in 15 × 3 = 45 pairs of sections for the staining. Within each pair, one section was stained for Flt3+NeuN, while the adjacent section on the same glass slide served as a negative control (for measuring the staining baseline). 
In each stained cerebellum section, including both the negative control and specific Flt3 staining, 3–4 different areas were imaged, and their average intensity was considered as a representative. In all of the sections’ imaging processes, the exposure time was kept same and was initially determined by the most suitable time for the sections stained Flt3 with the highest fluorescence to avoid overexposure, then this time was applied in all imaging. The Flt3 intensity on each glass slide was first averaged by its own 3 areas, then normalized by the negative control of the neighboring section, and finally counted into the groups for the statistical analysis. : This experiment tested the AP-IHC method’s ability to detect Kir2.1, a challenging protein to stain for. In A,B, motor cortex sections from three adult mice (same animals as the D60 group in ) were used for staining comparison. As in , neighboring section pairs were selected to compare standard immunofluorescent staining with AP-IHC. Each pair included one section stained for Kir2.1+NeuN and another control section that underwent the same staining in the presence of the Kir2.1 antibody blocking peptide. C,D compared human cortex sections all sourced from a single tissue block. E,F were from a single pig brain. : All samples were from cultured human iPSC-derived neurons. Coverslips from the same culture batch were stained simultaneously to compare different staining methods for detecting PSD95, a protein well known to be difficult to detect through immunostaining in human neurons.
Human embryonic stem cell (hESC)-derived neural progenitor cells (NPCs, generated from the WIBR1 hESC line) were seeded at 10,000 cells per well onto glass coverslips pre-seeded with an astrocyte feeder layer and cultured for 5 months for PSD95 staining.
Three different immunolabeling methods were used in this work: the regular immunofluorescent staining, HRP-based histochemistry, and the AP polymer-based histochemistry hybrid with the regular fluorescent staining. All information on primary antibodies, secondary antibodies, as well as other key materials and reagents are summarized in , respectively. Regular immunofluorescent staining The cryostat brain sections were heated on a heating plate for 20 min (37–42 °C) to dry out the wetness from the freezer, and O.C.T. was washed off in Tris-buffered saline (TBS). The pig and human brain paraffin-embedded sections went through xylene and a series of different concentrations of ethanol and were washed in TBS. All brain slides went through antigen retrieval at 95–100 °C for 10 min, then incubation with the blocking buffer (5% serum + 2% BSA in TBS-T (0.1% Tween-20)) for 1 h at room temperature, primary antibody incubation overnight at 4 °C and fluorescent secondary antibodies, and were then coverslipped with DAPI-fluoromount mounting medium. HRP-IHC Similar to the regular immunofluorescent staining, the brain section slides went through antigen retrieval and were incubated with 0.1% H 2 O 2 to inactivate the endogenous HRP. After blocking, one section on each slide was incubated with the Flt3 primary antibody at 4 °C overnight and the other section on the same slide was used as the staining control, which used the same amount of rabbit IgG to replace the Flt3 rabbit antibody. On the second day, after the TBS-T wash, the slides were incubated with goat anti-rabbit biotinylated secondary antibody, followed by VECTASTAIN ABC Reagent, and developed in freshly made DAB peroxidase substrate solution. Monitor the color development under the microscope until satisfied by comparing with the control section on the same slide. The stained slides went through a series of ETOH and Xylene, and were coverslipped with Cytoseal. AP-IHC hybrid with regular fluorescent immunostaining For the co-staining of Flt3 (rabbit) with NeuN (mouse), GFAP (chicken), Iba1 (goat), parvalbumin (chicken), and calbindin (goat) in brain sections, we used AP-IHC to boost Flt3’s signal and co-stained with other cell specific markers. The brain slides were processed the same as in the above HRP-IHC method (except we omitted the H 2 O 2 step) until adding the secondary antibody. Instead of the biotinylated secondary antibody in HRP histochemistry, the horse anti-rabbit AP-conjugated secondary antibody was used for 10 min at room temperature. After a brief wash with TBS-T and TBS, we developed AP with the substrate solution, monitored the color under the microscope, and stopped the reaction by an immediate TBS wash. AP-IHC produced the red fluorescent precipitance (wavelength similar to Alexa 594). The brain slides continued to be incubated with other fluorescent secondary antibodies for the co-stained fluorescence (e.g., Alexa 488, 647, etc.), and after brief washes, were then coverslipped with DAPI-fluoromount mounting medium. For the thick brain slices, the floating staining was carried out in the Eppendorf tubes or in the 24-well-plate wells. The same procedures as for the slice-mounted slides were used, except the high-pH washing buffer TBS-T or TBS (made from Trizma base without adjusting pH), in order to minimize the background. After the co-staining was finished, we mounted the brain slices onto the glass slides and coverslipped them with DAPI-fluoromount mounting medium. 
A negative control tube was also set up and processed in parallel for each staining. For cultured cell staining, the procedures were the same as above, except that PBS-T (0.2% Triton X-100) and PBS replaced TBS-T and TBS. A negative control cell coverslip was also set up and processed in parallel for each staining. Several points are critical for successful co-staining with this hybrid method: (1) After AP-IHC color development, no detergent-containing buffer should be used in the subsequent co-staining steps, since detergent destroys the precipitate pattern from AP-IHC. (2) Always include a negative staining control on the other brain section of the same glass slide, replacing the primary antibody with the same amount of IgG. (3) As in HRP histochemistry, inactivation of endogenous AP should be considered. Heating slides (as in antigen retrieval) is a good way to inactivate endogenous AP; if the staining protocol does not include heating, an AP inhibitor (Levamisole) may be considered.
Because strict controls were included in each staining of every slide (two sections per slide; the same amount of IgG replacing the antibody; antibody pre-incubated with blocking peptide; identical development times, etc.), images from the positive staining and the control staining were always taken with identical exposure settings. Most images in this study were taken under a regular fluorescent microscope (10×), and some (floating staining of thick brain slices and PSD95 synapse staining images) were taken with the Zeiss LSM 980 w/Airyscan 2 confocal microscope (10× for thick-slice imaging, 63× for PSD95 synapse imaging; Oberkochen, Germany). For each of the large pig brain slices, a few hundred images were taken and stitched together automatically with the Zeiss Axio Imager Z2 microscope (Oberkochen, Germany). For staining intensity quantification, the single-channel images and the other co-stained channel images were first merged in ImageJ (1.53k) in order to draw the outlines of the exact areas for quantification. After the measurement areas were outlined, the merged images were split into color channels, and only the target channel (for example, the red Flt3 channel) was used to measure intensity. On each glass slide, the measured intensity data were first normalized to the staining control data from the other section on the same slide and then pooled into groups. Sample numbers varied among experiments and are described above in the sample-size section. PSD95 puncta quantification was carried out with the FIJI (version 1.0) puncta measurement procedure. All images generated from microscopes and graphs from GraphPad Prism 10 (version 10.2.3) or Microsoft Excel (version 16.94) were assembled with Adobe Photoshop 2024.
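For illustration, the per-slide normalization described above (mean target-channel intensity divided by that of the IgG control section on the same slide) can be sketched in a few lines of Python. This is a minimal outline, not the authors' ImageJ workflow; scikit-image is assumed, and the file names, ROI mask, and channel index are hypothetical.

```python
# Minimal sketch of the per-slide intensity normalization (illustrative only).
# Assumes scikit-image; file names, ROI mask, and channel index are hypothetical.
import numpy as np
from skimage import io

def mean_channel_intensity(image_path, roi_mask, channel=0):
    """Mean intensity of one color channel (e.g., red = 0 for Flt3)
    inside a region of interest outlined on the merged image."""
    img = io.imread(image_path)      # multi-channel image from the microscope
    target = img[..., channel]      # split channels, keep the target one
    return float(target[roi_mask].mean())

stained = io.imread("flt3_section.tif")
roi_mask = np.ones(stained.shape[:2], dtype=bool)  # placeholder ROI (whole field)

flt3 = mean_channel_intensity("flt3_section.tif", roi_mask, channel=0)
ctrl = mean_channel_intensity("igg_control_section.tif", roi_mask, channel=0)

# Per-slide normalization: target section divided by the IgG control section
print(f"Normalized Flt3 intensity: {flt3 / ctrl:.2f}")
```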
GraphPad Prism 10 was used for all statistical analysis. Detailed analysis is described in each figure legend.
TANK prevents IFN-dependent fatal diffuse alveolar hemorrhage by suppressing DNA-cGAS aggregation | 87bfd822-927c-4f6f-8805-bb78f0d81fa6 | 8616552 | Anatomy[mh] | Systemic lupus erythematosus (SLE) is one of the autoimmune diseases characterized by a complex clinical syndrome comprising vasculitis, glomerulonephritis, skin rashes, nervous system symptoms, and diffuse alveolar hemorrhage (DAH) ( ). In SLE, autoantibodies such as anti-Sm and anti–double-stranded DNA (dsDNA) Abs are produced and deposited in tissues as the immune complex, causing inflammation and organ damage. Aberrant production of cytokines is critical for the pathogenesis of SLE. Particularly, type I IFNs are strongly associated with human SLE, and the levels of type I IFNs and the expression of type I IFN regulated genes (IFN signatures) are correlated with the pathogenesis of SLE ( ). Furthermore, proinflammatory cytokines such as IL-6, IL-17, and IL-18 are also implicated in SLE ( ). Thus, the aberrant production of type I IFNs and proinflammatory cytokines is central to the pathology of SLE. Understanding the regulatory mechanisms of this production is of great value for the therapeutic development of SLE. DAH is a rare but serious life-threatening complication of SLE ( ; ). DAH accompanied with SLE is mainly induced by pulmonary capillaritis, which leads to the disruption of the membrane integrity of capillary walls and the leakage of blood into alveoli. The patients with DAH suffer from hypoxemia, dyspnea, cough and hemoptysis, often with concomitant infections. It is postulated that the deposition of immune complexes to the alveolar walls and pulmonary vessels contribute to the development ( ). However, the detailed mechanisms how DAH is induced in SLE and which cytokines contribute to this pathogenesis is still unclear. Type I IFNs and cytokines are mainly produced by innate immune cells including macrophages and DCs including plasmacytoid DCs (pDCs) ( ; ). These cells activate the transcription of type I IFN and cytokine genes after the detection of pathogen-associated molecular patterns via pattern-recognition receptors (PRRs) including TLRs, RIG-I-like receptors (RLRs), and cyclic GMP-AMP (cGAMP) synthase (cGAS), which signal through adaptor proteins, MyD88 or TRIF, MAVS, and STING (stimulator of IFN genes), respectively ( ; ; ; ; ). Whereas these PRRs trigger distinct intracellular signaling pathways, all of the pathways converge to the activation of transcription factors IFN-regulatory factor (IRF) 3/7 as well as NF-κB, transactivating type I IFNs and proinflammatory cytokines, respectively. The PRR signaling pathways are negatively regulated by various cellular proteins. TRAF-family member associated NF-κB activator (TANK), also known as I-TRAF, is one of such proteins suppressing the NF-κB pathway downstream of the TLR signaling by inhibiting the ubiquitination of TNF receptor-associated factor 6 (TRAF6) ( ; ; ). Tank -deficient mice spontaneously develop lupus-like glomerular nephritis and production of autoantibodies, which is mediated by IL-6 ( ). Moreover, TANK associates with TANK binding kinase 1 (TBK1) and IκB kinase- i /-ε, kinases critical for the production of type I IFN by phosphorylating IRF-3/-7 ( ; ; ). 
Although TANK is reported to be required for the induction of type I IFNs ( ; ), production of type I IFNs in response to Newcastle disease virus (NDV), an RNA virus, was not impaired in Tank-deficient DCs ( ), indicating that TANK is dispensable for the RIG-I signaling pathway in innate immune cells. In humans, an SNP in TANK is associated with SLE in a Swedish cohort, suggesting that TANK is also involved in the pathogenesis of human SLE ( ). Nevertheless, it remains unknown whether TANK regulates IFN responses upon stimulation of PRRs other than RIG-I and how this regulation contributes to autoimmune pathogenesis. In this study, we investigated the role of TANK in a pristane (2,6,10,14-tetramethylpentadecane, TMPD)-induced lupus model and found that TANK is essential for the prevention of fatal DAH through the inhibition of lung vascular endothelial cell death. TANK deficiency resulted in enhanced expression of type I IFNs in innate immune cells after pristane treatment, and type I IFN signaling is essential for pristane-induced lethality under TANK deficiency. The STING signaling pathway activated by intracellular dsDNA is negatively regulated by TANK in addition to the TLR-MyD88-TRAF6 pathway, and STING is critical for pristane-induced severe DAH under Tank deficiency. Mechanistically, TANK functions to suppress the formation of dsDNA-cGAS aggregates. Together, our study reveals that TANK is critical for preventing pristane-induced fatal DAH in mice via negative regulation of cGAS-dependent recognition of cytoplasmic DNA.
Development of fatal DAH in Tank−/− mice after pristane treatment

To determine the involvement of TANK in the development of DAH, we took advantage of pristane, which induces SLE-like autoimmune disease, including DAH, in mice in a type I IFN-dependent manner ( ; ). We first examined whether pristane exacerbates the phenotypes of Tank−/− mice. Surprisingly, Tank−/− mice started to die 8 d after pristane treatment, and the mortality rate of pristane-treated Tank−/− mice eventually reached 89%, whereas most wild-type (WT) mice survived to 40 d after pristane injection ( ). We observed severe DAH in about 70% of Tank−/− mice 7 d after pristane treatment, although WT mice developed DAH much less frequently at this time point ( ). Most WT mice developed DAH by 14 d after treatment but eventually recovered and did not succumb to the DAH ( ). Tank−/− mice showed severe anemia at 7 d after pristane treatment, whereas hemoglobin levels in WT mice did not decrease even at 14 d after pristane treatment, when they showed DAH ( ). These results suggest that Tank−/−, but not WT, mice developed fatal anemia due to DAH. Histological analysis revealed that pristane treatment induced vasculitis in Tank−/− mice, but not in WT mice, as evidenced by the fragmentation of leukocytes ( ) and IgM and complement C3 deposition in perivascular lesions ( ). Given that microvascular inflammation in the pulmonary capillaries is suggested to be the cause of DAH ( ), we hypothesized that vascular endothelial cells were damaged in Tank−/− mice after pristane treatment. Indeed, terminal deoxynucleotidyl transferase dUTP nick end labeling (TUNEL) staining showed that pristane treatment markedly increased TUNEL-positive pulmonary vascular endothelial cells in Tank−/− lungs compared with WT at days 1 and 6 ( ), indicating that apoptosis of vascular endothelial cells was induced in Tank−/− lungs in response to pristane treatment. In contrast to the development of severe DAH, pristane treatment of Tank−/− mice did not cause hepatic, pancreatic, or acute renal failure, as assessed by the serum levels of transaminases (AST and ALT), urea nitrogen (BUN), creatinine (Cre), and albumin (Alb), or by histological changes in glomeruli and heart ( ). These data demonstrate that pristane causes apoptosis of pulmonary vascular endothelial cells, which leads to fatal DAH in mice under TANK deficiency.

Type I IFN signaling but not IL-6 mediates fatal DAH in Tank−/− mice after pristane treatment

Because TANK is involved in the regulation of innate immune signaling pathways ( ), we next investigated the contribution of cytokines to pristane-induced death in Tank−/− mice. Although IL-6 is critical for the spontaneous development of glomerular nephritis and autoantibody production in Tank−/− mice, IL-6 deficiency improved neither the survival rate of Tank−/− mice nor the prevalence of DAH ( ). In sharp contrast, abrogation of type I IFN signaling by the lack of the IFN receptor (Ifnar2) ameliorated DAH and dramatically rescued Tank−/− mice from pristane-induced death ( ). Thus, type I IFN signaling, but not IL-6, is critical for the development of pristane-induced fatal DAH under TANK deficiency. Given that TANK suppresses the production of autoantibodies and natural Abs, we examined whether elevated Ab production is involved in DAH under Tank deficiency by measuring serum anti-dsDNA Ab and total IgG1 and IgM Abs.
Although the lack of IL-6 decreased the production of these Abs in Tank−/− mice ( ), the abrogation of type I IFN signaling by Ifnar2 deficiency did not reduce, but rather increased, the levels of these Abs ( ). Furthermore, 6-mo-old Tank−/− Ifnar2−/− mice developed glomerulonephritis with mesangial cell proliferation and expansion of the mesangial matrix ( ), whereas the absence of IL-6 completely protected the mice from glomerulonephritis, as previously reported ( ). These data demonstrate that pristane-induced lethality of Tank−/− mice requires type I IFN signaling, but not IL-6, in contrast to the requirement of these cytokines for the development of autoimmunity under Tank deficiency.

TANK suppresses the recruitment of innate immune cells to the peritoneal cavity and IFN induction

We next investigated the cell type(s) producing type I IFNs in pristane-treated Tank−/− mice. We have previously demonstrated that CD11b+ Ly6Chigh cells (Ly6Chigh monocytes) are recruited to the peritoneal cavity after intraperitoneal injection of pristane, and that Ly6Chigh monocytes are the major source of the type I IFN that is critical for autoimmunity in WT mice ( ). First, intraperitoneal pristane treatment recruited slightly higher numbers of peritoneal exudate cells (PECs) in Tank−/− mice than in WT mice ( ). FACS analysis revealed that the proportion and number of Ly6Chigh monocytes were increased in Tank−/− mice compared with WT, whereas the number and proportion of CD11b+ Ly6Ghigh neutrophils were comparable between Tank−/− and WT mice ( ). pDCs are also known to produce large amounts of type I IFN upon viral infection ( ). Pristane-induced pDC numbers were also increased in Tank−/− mice compared with WT mice at 1 d after pristane treatment ( ). Because pristane treatment induces type I IFNs in myeloid cells recruited to the peritoneal cavity ( ), we next examined the activation of IFN signatures in PECs after pristane treatment. Interestingly, the expression levels of Ifnb1 and IFN-inducible genes (ISGs), such as Isg15 and Cxcl10, were higher in Tank−/− PECs than in WT PECs ( ). These findings indicate that TANK suppresses the pristane-induced recruitment of Ly6Chigh monocytes and pDCs to the peritoneal cavity as well as the production of type I IFNs.

TANK inhibits type I IFN responses mediated by TLR and cGAS signaling

These results prompted us to investigate the molecular mechanisms by which TANK controls pristane-mediated expression of type I IFNs in innate immune cells. The TLR7-MyD88 pathway is known to contribute to the production of type I IFNs in monocytes after pristane treatment in vivo ( ). Although TLR7 does not directly recognize pristane, the responsiveness to TLR7 ligands was augmented by treatment of cells with pristane, and RNA molecules from dying cells are implicated as the ligands for TLR7 ( ). Indeed, TANK expression suppressed IRF7-induced IFN-β promoter activity, which is induced downstream of MyD88 and TRAF6 ( ). Reciprocally, Tank-deficient BM pDCs showed much higher expression of Ifnb1 and Irf7 in response to TLR7 and TLR9 ligands ( ). Besides TLRs, the cytoplasmic nucleic acid sensors RIG-I-like receptors (RLRs) and cGAS induce production of type I IFNs via MAVS- and STING-dependent signaling pathways, respectively ( ; ; ). Consistent with our previous report ( ), the induction of ISGs upon transfection of Poly (I:C), a dsRNA analogue activating RLRs, was not elevated in the absence of TANK in conventional BMDCs ( ).
We next investigated whether TANK can modify responses to the introduction of cytoplasmic dsDNA. To our surprise, Tank−/− DCs and macrophages expressed higher amounts of ISGs, including Ifnb1, Isg15, and Cxcl10, in response to dsDNA transfection compared with WT cells ( ). These results demonstrate that TANK suppresses type I IFN responses induced by TLR and cGAS, but not RLR, signaling.

STING signaling is critical for pristane-induced lethality in Tank−/− mice

We then examined the contribution of the TLR, RLR, and cGAS signaling pathways to pristane-induced lethality under Tank deficiency. We found that deletion of TLR7 or MyD88 resulted in only a modest improvement of survival in Tank−/− mice after pristane treatment ( ), suggesting that signaling pathway(s) other than TLR7 are responsible for the pathogenesis. The involvement of the RLR-MAVS pathway was also quite modest, consistent with the aforementioned data ( ). In sharp contrast, deficiency of STING (Tmem173), the downstream molecule of cGAS signaling, greatly reduced pristane-induced mortality in Tank−/− mice ( ). The augmented recruitment of peritoneal Ly6Chigh monocytes in response to pristane under Tank deficiency was ameliorated by the co-deletion of STING ( ). On the other hand, the recruitment of neutrophils and pDCs was only modestly affected by the absence of TANK and STING ( ), suggesting that STING specifically affects the recruitment of Ly6Chigh monocytes in Tank−/− mice. Furthermore, Tmem173 deficiency suppressed the pristane-induced expression of Ifnb1, Isg15, and Cxcl10 in Tank−/− PECs ( ). These results clearly demonstrate that the exacerbation of pristane-induced lethality and the production of type I IFNs in Tank−/− mice depend on the STING-mediated pathway.

TANK suppresses viral and endogenous dsDNA-induced type I IFN responses

We then examined whether TANK regulates IFN responses induced by the recognition of exogenous and endogenous dsDNA via cGAS. First, the expression of ISGs in response to Vaccinia virus (VACV), a DNA virus, but not to NDV, an RNA virus, was augmented in Tank−/− GM-CSF-induced DCs ( ). These results indicate that TANK can suppress the induction of type I IFNs in response to DNA, but not RNA, virus infection. In the case of pristane-induced IFN responses in monocytes, the cGAS-STING pathway is thought to be activated by endogenous DNA, which can be released from mitochondria or from extranuclear chromatin forming micronuclei generated by genotoxic stress ( ; ; ). Consistent with this notion, Tank−/− macrophages showed elevated expression of Ifnb compared with WT cells in response to mitochondrial DNA release induced by treatment with ABT737 (a pan-Bcl-2 inhibitor) and Z-VAD-Fmk (a caspase inhibitor) ( ). Collectively, these results demonstrate that TANK suppresses type I IFN responses induced by both exogenous and endogenous cGAS ligands. When we examined the activation of TBK1, a kinase activated downstream of STING, Tank−/− PECs exhibited elevated phosphorylation of TBK1 as well as IRF3 in response to herring testis DNA stimulation compared with WT cells ( ). Thus, TANK restricts dsDNA-induced IFN reactions by suppressing signaling upstream of TBK1 phosphorylation.

TANK may inhibit generation of DNA-cGAS aggregates harboring ubiquitination

We then investigated the mechanism by which TANK specifically suppresses dsDNA-mediated IFN responses.
We initially hypothesized that TANK inhibits the activation of TBK1 by suppressing signaling of STING, which interacts with cyclic GMP-AMP (cGAMP), a second messenger generated by cGAS upon recognition of dsDNA ( ; ). However, cGAMP-induced ISG expression was comparable between WT and Tank-deficient macrophages, whereas dsDNA stimulation induced more type I IFN and ISGs in Tank−/− macrophages ( ). In addition, stimulation with DMXAA (5,6-dimethyl-9-oxo-9H-xanthene-4-acetic acid), a murine STING agonist ( ; ), resulted in comparable expression of ISGs between WT and Tank−/− macrophages ( ). Thus, TANK appears to suppress the IFN response to dsDNA stimulation upstream of STING activation. Consistently, the production of cGAMP in response to dsDNA stimulation was increased in Tank−/− macrophages compared with WT cells ( ). After transfection of cells with dsDNA, cGAS and dsDNA form puncta that represent cGAS undergoing a liquid-like phase transition ( ; ). Interestingly, although the expression of cGAS was comparable between WT and Tank−/− macrophages ( ), the numbers of cGAS-DNA puncta were significantly increased in Tank−/− macrophages compared with WT ( ). Nevertheless, TANK failed to coprecipitate cGAS in HEK293 cells even when expressed together with STING and TBK1 ( ). Reciprocally, cGAS did not coprecipitate TANK, even in response to dsDNA stimulation ( ). TANK suppresses TRAF6 ubiquitination in TLR signaling, and cGAS has been reported to be both positively and negatively regulated by its ubiquitination ( ; ; ; ; ). Indeed, the cGAS-DNA aggregates were co-stained with ubiquitin in both WT and Tank−/− macrophages ( ), and the numbers of cGAS-ubiquitin aggregates were significantly increased in Tank−/− macrophages compared with WT cells ( ). These results suggest that TANK contributes to the restriction of cGAS-ubiquitin aggregates in macrophages.
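As an illustrative sketch of how such puncta counts can be derived computationally (the study itself counted puncta manually in ImageJ with the Cell Counter plugin; see Materials and methods), the following Python outline segments each channel and counts cGAS puncta overlapping the dsDNA signal. The file name, channel order, and Otsu thresholding are assumptions, not the authors' procedure.

```python
# Illustrative alternative to the manual ImageJ/Cell Counter quantification.
# Assumes scikit-image; file name, channel order, and thresholds are hypothetical.
from skimage import io, measure
from skimage.filters import threshold_otsu

img = io.imread("cgas_dsdna_field.tif")   # (H, W, channels) confocal image
cgas, dna = img[..., 0], img[..., 1]      # cGAS and Cy3-dsDNA channels (assumed order)

cgas_mask = cgas > threshold_otsu(cgas)   # segment puncta in each channel
dna_mask = dna > threshold_otsu(dna)

labels = measure.label(cgas_mask)         # connected components = candidate cGAS puncta
colocalized = sum(
    1 for region in measure.regionprops(labels)
    if dna_mask[tuple(region.coords.T)].any()   # does the punctum overlap dsDNA signal?
)
print(f"cGAS-dsDNA colocalized puncta: {colocalized}")
```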
In the present study, we demonstrate that TANK is critical for the prevention of severe DAH caused by pristane treatment, an experimental SLE model, in mice. Treatment of Tank-deficient mice with pristane induced massive vascular endothelial cell death in the lung, dependent on signaling through type I IFNs but not IL-6. TANK deficiency enhanced the recruitment of inflammatory monocytes and pDCs and their expression of type I IFNs after pristane treatment. STING is required for pristane-induced fatal DAH in Tank−/− mice, and TANK functions to suppress cytoplasmic dsDNA-induced IFN responses via inhibition of DNA-cGAS aggregates. Collectively, this study contributes to the clarification of the mechanism of DAH in SLE and of a regulatory role of TANK in type I IFN production.

Besides lupus-like glomerular nephritis, DAH develops in about 60–70% of pristane-treated mice after 2–4 wk on the C57BL/6 background, although the DAH is generally not fatal in WT mice ( ). The contribution of B cells to DAH was demonstrated by the reduced prevalence of DAH in Igμ−/− mice, although T cells are dispensable ( ; ). Consistently, IgM and C3 are known to be required for DAH ( ). In addition, Mac-1-mediated conversion of macrophages toward classically activated macrophages promotes pristane-induced DAH ( ). Indeed, a recent single-cell sequencing study of lung immune cells in DAH revealed an influx of myeloid cells into the lung, with inflammatory monocytes playing central roles in DAH ( ). Because myeloid cells such as inflammatory monocytes express type I IFNs in response to pristane, TANK expressed in inflammatory monocytes likely contributes to the prevention of DAH by controlling type I IFN production as well as the recruitment of inflammatory monocytes. Among cytokines, IL-10 was reported to be important for the prevention of DAH ( ). On the other hand, the TLR and type I IFN signaling pathways are dispensable for the development of DAH in WT mice ( ; ; ). Nevertheless, in the absence of Tank, type I IFN signaling was essential for DAH pathogenesis. These observations suggest that type I IFN responses also become involved in the pathogenesis of DAH when the levels of type I IFNs and their signaling are increased by TANK deficiency. Further studies are nevertheless required to uncover the mechanism by which increased type I IFNs contribute to the development of DAH. It is well known that type I IFN production is correlated with the pathogenesis of human SLE ( ). Given that DAH is one severe complication of SLE, it is possible that type I IFNs also contribute to the development of DAH in human SLE patients.

In this study, we found that the cGAS-STING pathway contributes to DAH in pristane-treated Tank−/− mice. The importance of cGAS signaling in autoimmunity has been well demonstrated using Trex1−/− mice, a mouse model of human Aicardi-Goutieres syndrome and SLE ( ). Similarly, the cGAS-STING pathway is activated by the mutations of RNase H2 found in Aicardi-Goutieres syndrome patients ( ; ). Therefore, the cGAS-STING pathway might be suppressed by TANK in human SLE patients as well, preventing the pathogenesis of autoimmunity and the development of DAH. An open question is the source of the endogenous DNA recognized by cGAS that contributes to DAH. Self-DNA from damaged cells after pristane treatment can potentially activate cGAS, and indeed we found that type I IFNs induced by mitochondrial DNA from damaged cells are suppressed by TANK.
Interestingly, it was reported that apoptosis-derived membrane vesicles from SLE sera activate the cGAS-STING pathway to induce type I IFNs in human SLE patients ( ). It will be interesting to further explore whether TANK inhibits endogenous DNA-mediated immune responses in human SLE patients. We found that pristane-induced recruitment of Ly6Chigh monocytes in Tank−/− mice depends on the presence of STING. It was reported that type I IFN signaling induces the production of chemokines such as CCL2, CCL7, and CCL12, which recruit Ly6Chigh monocytes via interaction with CCR2 on these cells ( ). Thus, the STING pathway may contribute to the recruitment of Ly6Chigh monocytes through type I IFN-induced production of CCR2-activating chemokines. Another unanswered question is which cell type(s) are directly activated by endogenous DNA via the cGAS-STING pathway in response to pristane treatment. In addition to Ly6Chigh monocytes, pDCs and neutrophils are also recruited to the peritoneal cavity in response to pristane treatment. Future analysis of mice lacking STING in specific cell types will clarify which cells are initially activated by endogenous DNA via the STING pathway in response to pristane treatment.

TANK is known to interact with TBK1 and has been reported to be required for the induction of type I IFNs. In contrast, this study, together with a previous report ( ), clearly demonstrates that TANK is not a positive regulator of type I IFN induction in response to various PRRs in innate immune cells. On the contrary, TANK deficiency elevated type I IFN induction in response to transfection of dsDNA, but not dsRNA. Surprisingly, TANK does not directly control the activation of TBK1 or STING, suggesting that TANK suppresses dsDNA-mediated signaling upstream of STING. Cytoplasmic dsDNA induces multimerization of cGAS, culminating in liquid–liquid phase separation that forms DNA-cGAS puncta in cells ( ). The catalytic activity of cGAS to generate cGAMP is known to be controlled by posttranslational modifications, including ubiquitination ( ). Although TANK did not coprecipitate cGAS in the cytoplasm, TANK deficiency augmented the numbers of ubiquitin-containing DNA-cGAS aggregates. cGAS is reported to be polyubiquitinated by several E3 ubiquitin ligases, RNF185, TRIM56, TRIM41, and TRIM14 ( ; ; ; ). RNF185-mediated K27-linked polyubiquitination as well as TRIM56- and TRIM41-mediated monoubiquitination promote cGAS activation ( ; ; ). On the other hand, cGAS also undergoes K48-linked polyubiquitination inducing autophagic degradation, which is inhibited by the deubiquitinases USP14 and USP27X ( ; ). Considering that TANK can inhibit K63-linked polyubiquitination of TRAF6, TANK may suppress the ubiquitination that leads to the formation of DNA-cGAS aggregates, thereby inhibiting cGAMP generation, although we cannot exclude the possibility that TANK suppresses cGAS signaling independently of ubiquitination. Future studies will uncover the precise mechanisms by which TANK specifically controls the IFN response to cytoplasmic dsDNA stimulation.

Besides the cGAS-STING pathway, the lack of TLR7 or MyD88 also ameliorated pristane-induced lethality in TANK-deficient mice, although the contribution of TLR7 was smaller than that of STING. TLR7 was reported to be involved in pristane-induced type I IFN production in inflammatory monocytes ( ), and TANK suppresses cytokine production downstream of TLR7, as it does for other TLRs.
Thus, not only dsDNA but also RNA derived from damaged cells after pristane treatment appears to be involved in the development of DAH under TANK deficiency. Ablation of type I IFN signaling did not ameliorate, but rather exacerbated, autoantibody production in Tank-deficient mice. Although type I IFNs are critical for autoimmunity in various mouse models and even in human SLE, there are autoimmunity models that develop independently of type I IFNs, including experimental autoimmune encephalitis, DNase II deficiency, and TLR7-mediated lupus nephritis mouse models, indicating that type I IFN-independent pathways contribute to autoimmunity under certain autoimmune conditions. Given that the NF-κB signaling pathway is also negatively regulated by TANK, enhanced NF-κB-driven induction of proinflammatory cytokines, rather than type I IFNs, may be more important for the development of long-term autoimmunity in the absence of TANK.

DAH is a rare but serious complication of SLE, with reported mortality ranging from 0% to 62%. The mechanisms underlying the development of DAH in SLE are not well understood. Although there are many reports of therapies for DAH in SLE, including cyclophosphamide, plasmapheresis, extracorporeal membrane oxygenation (ECMO), rituximab, mycophenolate mofetil, recombinant factor VII, and stem cell transplantation, these are general immunosuppressive or salvage therapies ( ). Thus, clarification of the pathology of DAH is necessary for the identification of novel therapeutic targets. Interestingly, SNPs of TANK are associated with human SLE, together with other genes related to the regulation of type I IFN responses, such as IKBKE, STAT1, IL8, and TRAF6 ( ). Thus, TANK might serve as a novel therapeutic target in human SLE, especially for preventing the development of DAH. Furthermore, inhibition of type I IFN signaling or the cGAS-STING pathway represents another potential therapeutic approach for DAH, which could initially be evaluated in Tank−/− mice using neutralizing Abs against type I IFNs or STING antagonists such as H-151 ( ).

In summary, we demonstrate here that TANK functions as a critical suppressor of pristane-induced fatal DAH by controlling type I IFN responses through the cGAS-STING pathway, via regulation of DNA-cGAS aggregate formation. Further studies will uncover the mechanisms by which TANK contributes to the complex pathogenesis of DAH and may pave the way to managing this serious complication of SLE.
Mice

Tank−/− mice were generated as previously described ( ). Il6−/−, Ifnar2−/−, Tlr7−/−, Myd88−/−, and Tmem173gt/gt mice were as described ( ; ; ), and mice aged 8–16 wk on the C57BL/6 background were used for the analyses. The animal experiments were approved by the Committee for Animal Experiments of the Graduate School of Medicine, Kyoto University. WT and the indicated mutant mice received a single intraperitoneal (i.p.) injection of 0.5 ml 2,6,10,14-tetramethylpentadecane (TMPD; pristane; Sigma-Aldrich).

Reagents

dsDNA (ISD, HSV-60) was purchased from InvivoGen. Poly (I:C) was obtained from GE Healthcare. Herring testis DNA and Poly (U) were from Sigma-Aldrich; CpG-DNA (D19), R-848, 2′3′-cyclic GMP-AMP (cGAMP), and 5,6-dimethylxanthenone-4-acetic acid (DMXAA) were purchased from InvivoGen. ABT737 and Z-VAD-Fmk were purchased from Santa Cruz and R&D Systems, respectively. NDV was as described previously ( ). The Vaccinia virus DIE strain was kindly provided by Dr. K Ishii, National Institute of Infectious Diseases. Abs specific to phospho-TBK1, phospho-IRF3, and β-actin were purchased from Cell Signaling Technology or Santa Cruz Biotechnology. Abs for flow cytometry, including anti-mouse CD4, CD31, CD62L, CD44, CD45, B220, CD138, CD11b, Ly6C, and Ly6G, were purchased from BD Biosciences or BioLegend. Anti-mouse PDCA1 Ab was obtained from Miltenyi Biotec. Plasmids for the IFN-β-promoter luciferase reporter and for MyD88, TRAF6, IRF7, and TANK expression were described previously ( ; ).

Flow cytometry

For flow cytometry analyses, PECs or splenocytes were stained with Ghost Dye Violet 510 reagent (Tonbo Biosciences) according to the manufacturer's instructions to exclude dead cells, then treated with an Ab cocktail solution containing anti-mouse CD16/CD32 Ab (BioLegend) and stained with the indicated Abs. For the analysis of TUNEL-positive cells by flow cytometry, lung single-cell suspensions were prepared as described previously ( ) and subjected to terminal deoxynucleotidyl transferase dUTP nick end labeling (TUNEL) staining using the FragEL DNA Fragmentation Detection Kit (Calbiochem) according to the manufacturer's instructions, together with anti-mouse CD45 and anti-mouse CD31 Abs. Data were obtained using FACSVerse (BD Biosciences) or LSRFortessa X-20 (BD Biosciences) flow cytometers and analyzed with FlowJo software (FlowJo, LLC).

Lung pathology

Lungs of pristane-treated mice were harvested at the indicated times and fixed with formalin. The fixed tissues were paraffin-embedded and sectioned, followed by staining with hematoxylin & eosin (H&E). The development of DAH was evaluated by gross inspection of the excised lungs and confirmed microscopically. For the analysis of apoptotic cells, tissue sections were subjected to antigen retrieval and analyzed by TUNEL staining using the FragEL DNA Fragmentation Detection Kit (Calbiochem) according to the manufacturer's instructions.

Preparation of mouse cells

PECs were isolated from the peritoneal cavities of mice 3 d after injection of 2 ml of 4.0% thioglycolate medium (Sigma-Aldrich). BM cells were isolated from femurs and cultured in RPMI 1640 medium supplemented with 10% FCS, 50 μM 2-ME, and 100 ng/ml Flt3L (BioLegend) for 7 d. Floating cells were collected with gentle agitation and used as BM-pDCs. BM cells cultured in RPMI 1640 medium supplemented with 20% FCS, 50 μM 2-ME, and 10 ng/ml GM-CSF (Peprotech) for 6 d, with replacement of the culture media on days 2 and 4, were used as BMDCs.
BM cells cultured in RPMI 1640 medium supplemented with 20% FCS, 50 μM 2-ME, and 20 ng/ml M-CSF (BioLegend) for 6 d were used as BM-derived macrophages. Digitonin permeabilization was used to deliver cGAMP into cultured cells as previously described ( ).

Gene expression analysis

RNA from PECs of pristane-treated mice or from BMDCs was prepared using TRIzol reagent (Thermo Fisher Scientific) according to the manufacturer's protocol. cDNA was then generated with ReverTra Ace (Toyobo), and the reverse transcription reaction was subsequently used as a template for real-time PCR. Real-time PCR assays were performed on a StepOnePlus instrument (Applied Biosystems) using SYBR Green PCR master mix (Toyobo) according to the manufacturer's protocol. Data were normalized to Actb. The following primers were used:
Actb forward: 5′-ATGCTCCCCGGGCTGTAT-3′; Actb reverse: 5′-CATAGGATCCTTCTGACCCATTC-3′;
Ifnb1 forward: 5′-CAGCTCCAAGAAAGGACGAAC-3′; Ifnb1 reverse: 5′-GGCAGTGTAACTCTTCTGCAT-3′;
Isg15 forward: 5′-GGTGTCCGTGACTAACTCCAT-3′; Isg15 reverse: 5′-TGGAAAGGGTAAGACCGTCCT-3′;
Cxcl10 forward: 5′-ATGCTGCCGTCATTTTCTG-3′; Cxcl10 reverse: 5′-ATTCTCACTGGCCCGTCAT-3′;
Irf7 forward: 5′-TGCAGTACAGCCACATACTGG-3′; Irf7 reverse: 5′-CTCTAAACACGGTCTTGCTC-3′.

Quantification of anti-dsDNA Ab and total IgM and IgG1 by ELISA

Serum anti-dsDNA Ab levels were determined by ELISA as described previously ( ). Briefly, plates were coated with 5 μg/ml calf thymus dsDNA (Sigma-Aldrich). Sera were added to the plates and, after washing, further incubated with AP-conjugated anti-mouse IgG Ab. The AP substrate (Sigma-Aldrich) was then added, and absorbance was measured at 405 nm. Anti-dsDNA concentrations were quantified according to the standard curve. Concentrations of total IgM and IgG1 in the sera were determined by ELISA as described previously ( ).

Luciferase reporter assay

HEK293 cells in 24-well plates were transiently transfected with 100 ng of the Ifnb promoter reporter and 20 ng of the control Renilla luciferase plasmid together with the indicated plasmids, with or without 100 ng of the TANK expression plasmid. The total amount of transfected DNA was adjusted to 445 ng/ml with the pcDNA3.1(+) empty plasmid (Mock). Cell lysates were prepared 48 h after transfection, and luciferase activity was measured using the Dual-Luciferase Reporter Assay System (Promega) following the manufacturer's protocol. The Renilla luciferase reporter plasmid was co-transfected as an internal control.

Immunoblot analysis

Cells were lysed in a lysis buffer containing 1% Nonidet P-40, 150 mM NaCl, 20 mM Tris–HCl (pH 7.5), 1 mM EDTA, and a protease inhibitor cocktail (Roche). Lysates were separated by SDS–PAGE and transferred onto polyvinylidene difluoride membranes (Bio-Rad). After the membranes were blotted with Abs, proteins were visualized with Luminata Forte Western HRP Substrate (Millipore). Luminescence data were obtained with an ImageQuant LAS 4000 (GE Healthcare). Intensities of p-TBK1 and p-IRF3 bands were quantified using ImageJ software.

Measurement of cGAMP concentration

Macrophages treated with dsDNA for 2 h were lysed in M-PER Mammalian Protein Extraction Reagent (Thermo Fisher Scientific) and used to measure cGAMP concentrations with the 2′3′-cGAMP ELISA Kit (Cayman Chemical) according to the manufacturer's instructions. Protein concentrations in the cell lysates were measured using the Pierce BCA Protein Assay Kit (Thermo Fisher Scientific) and were used to normalize cGAMP concentrations.
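For illustration, relative expression from such real-time PCR data can be computed as follows. The text specifies normalization to Actb but not the exact formula, so the standard 2^-ΔΔCt method is assumed here, with hypothetical Ct values.

```python
# Sketch of relative qPCR quantification normalized to Actb.
# The 2^-ΔΔCt method is an assumed (standard) choice; Ct values are hypothetical.
def relative_expression(ct_target, ct_actb, ct_target_ref, ct_actb_ref):
    """Fold change of a target gene vs. a reference sample (2^-ΔΔCt),
    each Ct first normalized to the Actb housekeeping gene."""
    delta_sample = ct_target - ct_actb            # ΔCt, sample
    delta_ref = ct_target_ref - ct_actb_ref       # ΔCt, reference
    return 2 ** -(delta_sample - delta_ref)       # ΔΔCt -> fold change

# e.g., Ifnb1 in pristane-treated Tank-/- PECs relative to WT PECs (made-up Cts)
fold = relative_expression(ct_target=24.1, ct_actb=17.0,
                           ct_target_ref=27.3, ct_actb_ref=17.2)
print(f"Ifnb1 fold change vs. WT: {fold:.1f}x")   # -> 8.0x
```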
Immunofluorescence

Macrophages were seeded on coverslips placed in 24-well plates at 1 × 10^5 cells/well, fixed with 3% paraformaldehyde in PBS for 10 min, incubated with 50 mM NH4Cl in PBS for 10 min, permeabilized with 0.5% Triton X-100 for 10 min, and blocked with 2% normal goat serum (Dako) and 0.1% gelatin in PBS. A primary Ab to cGAS (D-9; Santa Cruz) and a FITC-conjugated monoclonal Ab against mono- and polyubiquitinylated conjugates (FK2; Enzo) were used for staining, in combination with an Alexa 568-conjugated goat anti-mouse IgG (H + L) secondary Ab (Invitrogen). Cy3-labeled dsDNA was generated by annealing Cy3-labeled sense and antisense ssDNA oligos in an annealing buffer (20 mM Tris–HCl, pH 7.5, 50 mM NaCl), ramping down from 95°C to 25°C at 1°C/min, followed by NaOAc and ethanol precipitation. The annealed dsDNA oligos were resuspended in the desired buffer for experimental use. Images were captured on a TCS SPE confocal microscope (Leica) and analyzed with LAS-AF software (Leica). For the quantification of cGAS-dsDNA colocalized puncta and of the integrated densities of merged C3 and IgM images, images were analyzed using ImageJ (National Institutes of Health) and the Cell Counter plugin. The ratio between puncta in WT and Tank KO BMDMs was plotted using GraphPad Prism 8.

Statistical analysis

Statistical significance was calculated with the two-tailed t test or the log-rank test. P-values of less than 0.05 were considered significant.
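For illustration, the survival comparison evaluated by the log-rank test can be sketched with the Python lifelines package; the durations and event indicators below are hypothetical placeholders, not study data.

```python
# Minimal sketch of a Kaplan-Meier / log-rank survival comparison,
# assuming the lifelines package; the data below are hypothetical.
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# durations: days after pristane injection; event 1 = death, 0 = censored at day 40
wt_days, wt_events = [40, 40, 40, 40, 35], [0, 0, 0, 0, 1]
ko_days, ko_events = [8, 10, 12, 15, 40], [1, 1, 1, 1, 0]

kmf_wt = KaplanMeierFitter().fit(wt_days, wt_events, label="WT")
kmf_ko = KaplanMeierFitter().fit(ko_days, ko_events, label="Tank-/-")

# Two-group log-rank test on the same durations and event indicators
result = logrank_test(wt_days, ko_days,
                      event_observed_A=wt_events, event_observed_B=ko_events)
print(f"log-rank P = {result.p_value:.4f}")
```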
Tank −/− mice were generated as previously described ( ). Il6 −/− , Ifnar2 −/− , Tlr7 −/− , Myd88 −/− , and Tmem173 gt/gt mice were as described ( ; ; ) and mice at ages between 8 and 16 wk under C57BL/6 background were used for the analysis. The animal experiments were approved by the Committee for Animal Experiments of Graduate School of Medicine, Kyoto University. WT and indicated mutant mice received a single intraperitoneal (i.p.) injection of 0.5 ml 2,6,10,14-tetramethylpentadecane (TMPD; pristane; Sigma-Aldrich).
dsDNA (ISD, HSV-60) was purchased from InvivoGen. Poly (I:C) was obtained from GE Healthcare. Herring testis DNA and Poly (U) was from Sigma-Aldrich, CpG-DNA (D19), R-848, 2′3′-Cyclic GMP-AMP (cGAMP), and 5,6-dimethylxanthenone-4-acetic acid (DMXAA) were purchased from InvivoGen. ABT737 and Z-VAD-Fmk were purchased from Santa Cruz and R&D Systems, respectively. NDV was as described previously ( ). Vaccinia virus DIE strain was kindly provided by Dr. K Ishii, National Institute of Infectious Diseases. Abs specific to phospho-TBK1, phospho-IRF3, and β-actin were purchased from Cell Signaling Technology or Santa Cruz Biotechnology. Abs for flow cytometry including anti-mouse CD4, CD31, CD62L, CD44, CD45, B220, CD138, CD11b, Ly6C, and Ly6G were purchased from BD Biosciences or BioLegend. Anti-mouse PDCA1 Ab was obtained from Miltenyi Biotec. Plasmids for the IFN-β-promoter luciferase reporter, MyD88, TRAF6, IRF7, and TANK expression are described previously ( ; ).
For flow cytometry analyses, PECs or splenocytes were stained with Ghost Dye Violet 510 reagent (Tonbo Biosciences) according to the manufacturer’s instruction to exclude dead cells, and then treated with Ab cocktail solution containing anti-mouse CD16/CD32 (BioLegend) Ab and stained with indicated Abs. For the analysis of TUNEL-positive cells by flow cytometry, lung single cell suspension was prepared as described previously ( ), and subjected to terminal deoxynucleotidyl transferase dUTP nick end labeling (TUNEL) staining using FragEL DNA Fragmentation detection kit (Calbiochem) according to the manufacturer’s instructions, together with anti-mouse CD45 and anti-mouse CD31 Abs. Data were obtained by using FACSVerse flow cytometers (BD Biosciences) or LSRFotessa X-20 (BD Biosciences). Data were analyzed with FlowJo software (FlowJo, LLC).
Lung of pristane-treated mice were prepared at indicated time and fixed with formalin. The fixed tissues were paraffin embedded and sectioned followed by staining with hematoxylin & eosin (H&E). The development of DAH was evaluated by gross inspection of excised lung and confirmed microscopically. For the analysis of apoptotic cells, tissue sections were antigen retrieved and analyzed by TUNEL staining using FragEL DNA Fragmentation detection kit (Calbiochem) according to the manufacturer’s instructions.
PECs were isolated from the peritoneal cavities of mice 3 d after injection with 2 ml of 4.0% thioglycolate medium (Sigma-Aldrich). BM cells were isolated from femurs and were cultured in RPMI 1640 medium supplemented with 10% FCS, 50 μM 2-ME, and 100 ng/ml Flt3L (BioLegend) for 7 d. Floating cells were collected with gentle agitation and used as BM-pDCs. BM cells cultured in RPMI 1640 medium supplemented with 20% FCS, 50 μM 2-ME, and 10 ng/ml GM-CSF (Peprotech) for 6 d with the replacement of culture media on day 2 and 4 were used as BMDCs. BM cells cultured in RPMI 1640 medium supplemented with 20% FCS, 50 μM 2-ME, and 20 ng/ml M-CSF (BioLegend) for 6 d were used as BM-derived macrophages. Digitonin permeabilization was used to deliver cGAMP into cultured cells as previously described ( ).
RNA from PECs from pristane-treated mice or BMDCs were prepared using TRIzol reagent (Thermo Fisher Scientific) according to manufacturer’s protocol. Then cDNA was generated with the ReverTra Ace (Toyobo). The reverse transcription reaction was subsequently used as a template for real-time PCR. Real-time PCR assays were performed on StepOnePlus (Applied Biosystems) using SYBR Green PCR master mix (Toyobo) according to the manufacturer’s protocol. Data were normalized to Actb . The following primers were used: Actb forward; 5′-ATGCTCCCCGGGCTGTAT-3′, Actb reverse; 5′-CATAGGATCCTTCTGACCCATTC-3′, Ifnb1 forward; 5′-CAGCTCCAAGAAAGGACGAAC-3′, Ifnb1 reverse; 5′-GGCAGTGTAACTCTTCTGCAT-3′, Isg15 forward; 5′-GGTGTCCGTGACTAACTCCAT-3′, Isg15 reverse 5′-TGGAAAGGGTAAGACCGTCCT-3′, Cxcl10 forward; 5′-ATGCTGCCGTCATTTTCTG-3′, Cxcl10 reverse; 5′-ATTCTCACTGGCCCGTCAT-3′, Irf7 forward; 5′-TGCAGTACAGCCACATACTGG-3′, Irf7 reverse; 5′-CTCTAAACACGGTCTTGCTC-3′.
Serum anti-dsDNA Ab levels were determined by ELISA as described previously ( ). Briefly, plates were coated with 5 μg/ml calf thymus dsDNA (Sigma-Aldrich). Sera were added to the plate and further incubated with AP-conjugated anti-mouse IgG Ab after washing. Then the AP substrate (Sigma-Aldrich) was added, and absorbance was measured at 405 nm. Anti-dsDNA concentrations were quantified according to the standard curve. Concentrations of total IgM and IgG1 levels in the sera were determined by ELISA as described previously ( ).
HEK293 cells on 24 well plates were transiently transfected with the 100 ng Ifnb promoter reporter and 20 ng control Renilla luciferase plasmids together with indicated plasmids with or without a 100 ng TANK expression plasmid. The amounts of total transfected DNA were adjusted to 445 ng/ml with a pcDNA3.1(+) empty plasmid (Mock). Cell lysates were prepared 48 h after transfection and the luciferase activity was measured by using the Dual-luciferase reporter assay system (Promega) following the manufacturer’s protocol. The Renilla luciferase reporter plasmid was simultaneously transfected as an internal control.
Cells were lysed in a lysis buffer containing 1% Nonidet P-40, 150 mM NaCl, 20 mM Tris–HCl (pH 7.5), 1 mM EDTA, and a protease inhibitor cocktail (Roche). Lysates were separated by SDS–PAGE and transferred onto polyvinylidene difluoride membranes (Bio-Rad). After membranes were blotted with Abs, proteins on membranes were visualized with Luminata Forte Western HRP Substrate (Millipore). Luminescence data were obtained by ImageQuant LAS 4000 (GE Healthcare). Intensities of p-TBK1 and p-IRF3 bands were quantified by using ImageJ software.
Macrophages treated with dsDNA for 2 h were lysed in M-PER Mammalian Protein Extraction Reagent (Thermo Fisher Scientific) buffer and used for the measurement of cGAMP concentration by using the 2′3′-cGAMP ELISA Kit (Cayman Chemical) according to the manufacturer’s instructions. Protein concentrations in the cell lysates were measured using Pierce BCA Protein Assay Kit (Thermo Fisher Scientific) and was used to normalize cGAMP concentrations.
Macrophages were seeded on cover slops placed on 24 well plates to 1 × 10 5 cells/well, and fixed with 3% paraformaldehyde in PBS for 10 min, incubated with 50 mM NH 4 Cl in PBS for 10 min, permeabilized with 0.5% Triton X-100 for 10 min, and blocked with 2% normal goat serum (Dako) and 0.1% gelatin in PBS. Primary Abs to cGAS (D-9; Santa Cruz), FITC-conjugated mono- and polyubiquitinylated conjugates monoclonal Ab (FK2; Enzo) were used for staining in combination with secondary Ab conjugated to Alexa 568 goat anti-Mouse IgG (H + L) (Invitrogen). Cy3-labeled dsDNA were generated by annealing Cy3-labeled sense and anti-sense ssDNA oligos in an annealing buffer (20 mM Tris–HCl, pH 7.5, 50 mM NaCl) ramping down from 95°C to 25°C at 1°C/min, followed by NaOAc and ethanol precipitation. Annealed dsDNA oligos were resuspended into a desired buffer for experimental use. Images were captured on a TCS SPE confocal microscopes (Leica) and analyzed with the LAS-AF software (Leica). For the quantification of cGAS and dsDNA colocalized-puncta and the quantification of integrated densities of C3 and IgM merged images, images were analyzed using ImageJ (National Institutes of Health) and the Cell Counter plugin. The ratio between puncta in WT and Tank KO BMDMs was plotted using GraphPad Prism 8.
Statistical significance was calculated with the two-tailed t test or log-rank test. P -values of less than 0.05 were considered significant.
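Both tests named above are one-liners in standard scientific Python; the sketch below assumes SciPy for the t test and the lifelines package for the log-rank test, with placeholder data rather than values from the study.

```python
import numpy as np
from scipy import stats
from lifelines.statistics import logrank_test

# Two-tailed t test on a continuous readout from two groups (placeholder data)
group_a = np.array([1.0, 1.3, 0.9, 1.1, 1.2])
group_b = np.array([2.1, 1.8, 2.4, 2.0, 2.2])
t_stat, p_val = stats.ttest_ind(group_a, group_b)
print(f"t test: p = {p_val:.4f}")

# Log-rank test on survival times; event = 1 for death, 0 for censored
time_a, event_a = [10, 14, 20, 25, 30], [1, 1, 1, 0, 1]
time_b, event_b = [8, 9, 12, 15, 16], [1, 1, 1, 1, 1]
result = logrank_test(time_a, time_b, event_observed_A=event_a, event_observed_B=event_b)
print(f"log-rank: p = {result.p_value:.4f}")
```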
Putting Patients First: Pragmatic Trials in Gynecologic Oncology

The number of patients with gynecologic malignancies, including ovarian, endometrial and cervical cancers, is increasing in Canada, with a combined 13,200 new cases and 4000 deaths estimated for 2024 by Canadian Cancer Statistics. Of note, half of these deaths are due to ovarian cancer. The physical, emotional and economic impacts of a gynecologic cancer diagnosis are substantial. Newer and frequently more toxic combination therapies are increasingly available, incorporating molecularly targeted agents, including immunotherapy. While offering benefits to many patients, these new therapies cause significant morbidity and, in some cases, treatment-related deaths. Furthermore, the clinical trials establishing these new standards of care have selective eligibility criteria that do not reflect real-world patient populations. This makes interpretation of the data and risk particularly challenging for older patients (over 65 years), patients of ethnicity other than Caucasian, and patients with comorbidities and poor performance status. In addition, the provision of cancer care is becoming more expensive, which has important implications for providing equitable access to cancer care across all Canadian jurisdictions. The estimated cost of treating gynecologic cancers was ~CAD 897 M in 2024, with direct health system costs projected to increase by 24% over the next decade. These projections do not account for expected advances in diagnosis and treatment, nor for out-of-pocket expenses that directly affect patients and caregivers. Research to discover and optimize treatments that improve survival and enhance quality of life is very important, as is research to improve access and reduce physical and financial toxicity. The Society of Gynecologic Oncology of Canada (GOC) is a multidisciplinary organization with a mission to improve the care of women with or at risk of gynecologic cancer. The GOC, in partnership with patients and patient advocacy groups such as Ovarian Cancer Canada (OCC), has recognized the importance of facilitating research that bridges gaps in knowledge between trial-based evidence, real-world clinical practice and healthcare policy. Most importantly, the GOC recognizes the need to conduct and encourage research that addresses questions important to patients and clinicians. Given the increasing Canadian interest in patient-centred research, with grants to support pragmatic clinical trials (PCTs) now becoming more commonplace, the GOC decided to formally launch a programme to stimulate pragmatic research. The 2024 GOC Pragmatic Clinical Trials Workshop was planned to bring together stakeholders interested in PCT research in gynecologic cancers and to provide access to experts in the field. The meeting was held over two days in Toronto in November 2024, with patients, clinicians and scientists from across Canada ( ). Topics of discussion included an overview of pragmatism, methodological considerations for surgery and systemic therapy PCTs, and an overview of trial outcomes that matter as described and defined by patients. There was a break-out session to explore practical ways to address patient-prescribed research priorities.
The meeting finished with interactive round table discussions with the specific objective of developing investigator-initiated pragmatic trials in gynecologic oncology surgery and systemic therapy. The goal of the meeting was to establish the relationships needed to initiate coordinated and supported PCTs in gynecologic oncology, leveraging the mentorship and resources of REaCT (ReThinking Clinical Trials), IMPACTS (Innovative Multicentre Patient-centered Approach to Clinical Trials in Surgery), Common Sense Oncology, and Ovarian Cancer Canada to improve the quality of gynecologic cancer care nationally.
The GOC convened a national planning committee in March 2024 to introduce pragmatic trials to the specialty. A strategy to select participants, based on a survey of self-stated personal goals for the workshop, was developed; the survey was distributed electronically to all GOC members. To have tangible pragmatic trial concepts for discussion with the participants and invited experts, a call for brief PCT proposals was made to GOC members. At the same time, OCC sent out a one-page, bilingual overview of pragmatic clinical trials and a request for feedback to the national patient community through email and the online patient platform OVdialogue (Ovarian dialog) (English version: https://ovdialogue.ovariancanada.org/home ) (accessed on 26 February 2025). See .
3.1. Pragmatic Trials
Invited speaker Professor Ian Tannock provided a foundation for pragmatism in clinical medicine. Pragmatic trials are designed to efficiently answer everyday clinical questions using minimal (if any) additional healthcare resources. Pragmatic trial methodology offers broad patient inclusion, streamlined consent procedures and minimal data element collection. Pragmatic trials are also referred to as effectiveness trials, with results that are more likely to be reproducible in the 'real-world' patients that we treat every day. The kinds of questions asked in pragmatic trials are not typically addressed through industry-sponsored randomized controlled trials (RCTs) because they offer no potential for commercial impact and/or expansion of regulatory cancer drug approval. The design and execution of pragmatic trials date back to the 1960s, but for most practicing clinical oncologists, pragmatism is a new concept. Government grants supported 60% of RCTs in the 1970s and 1980s, but since then, clinical trials have been largely taken over by industry. Industry organized and funded ~90% of phase 3 RCTs between 2010 and 2020. Industry-funded trials are conducted under near-ideal circumstances, with inclusion criteria favouring patients with better performance status, fewer comorbidities and no competing risks. They are very costly to run, resource- and time-intensive, and require numerous extra visits, tests and procedures. These so-called efficacy trials have led to substantial gains for highly selected patients but have also led to expansive approvals for drugs that often produce minimal, if any, improvements in outcomes in the real-world patient population. Furthermore, many trials of targeted therapies have included a wide group of patients rather than selecting patients according to the presence of an appropriate biomarker; an example is a trial of niraparib for response maintenance in women with ovarian cancer. Approvals of targeted therapies for unselected patients contradict the principle of precision medicine, which aims to match the right treatment to the right patient. In addition, the side effects of investigational drugs are often poorly assessed in industry-sponsored trials, with failure to capture chronic toxicities that lead to important changes in quality of life and are of importance to patients. The widely used CTCAE (Common Terminology Criteria for Adverse Events) criteria were developed to assess acute toxicities of conventional chemotherapy drugs and may not adequately reflect the spectrum of toxicities arising from newer targeted agents. Pragmatic trials are needed to compare the variable standards of care in practice across sites and provinces with many oncology drug options, and to evaluate the importance of the efficacy–effectiveness gap (EEG). The EEG is the difference between the results obtained in the clinical trials that lead to the registration of new treatments and the results obtained in everyday practice. Many examples of the EEG challenge confidence in drug-approval decisions based on industry-funded RCTs, which do not take into consideration the dichotomy between 'ideal' and 'real-world' characteristics of health systems and patients. In practice, the EEG means that benefits are usually smaller, and toxicities higher, in real-world clinical oncology than in industry-funded RCTs.
3.2. Common Sense Oncology
Dr. Chris Booth from Common Sense Oncology (CSO) gave a presentation on how this organization aligns with and promotes PCTs in oncology. CSO is a grass-roots collective launched in 2023 that includes clinicians, academics, patients, advocates, and other stakeholders from health systems around the world. The core mission of CSO is to ensure that cancer care and innovation are focused on outcomes that matter to patients rather than the commercial bottom line. CSO's vision is that, irrespective of where someone lives, they have access to cancer treatments that make a real difference in their lives. CSO is working with clinicians, trialists and journals to re-calibrate how RCTs are designed and reported. Dr. Booth emphasized that, under most circumstances, the primary study endpoint should be overall survival (OS) and/or quality of life (QoL). A historical perspective was presented on how progression-free survival (PFS) has largely replaced OS as the endpoint of industry-sponsored trials. Dr. Booth highlighted that PFS was originally developed as a screening tool to identify signals of activity in early drug development. The limitations of using OS as a trial endpoint include longer follow-up, higher costs and the need to control for the impact of post-progression therapies; these limitations drove the adoption of PFS as a regulator-recognized endpoint. Unfortunately, PFS is not a valid surrogate for QoL or OS in most contexts, although there are some rare exceptions to this rule [ , , , ]. Dr. Booth emphasized that another way for clinicians to think about PFS is as 'time to changes on a CT scan', which may not relate to time to symptoms or time to reinstitution of therapy. He also presented the concept of informative censoring, which means that, in practice, at least a proportion of what we see as PFS benefit in randomized trials is a statistical artefact [ , , , ]. It is important to understand the circumstances in which we should use PFS as a trial endpoint and when we should not. As a minimum, if PFS is a primary endpoint, trials should also measure OS and QoL. CSO is working to improve how trials are designed; work streams are also underway to improve oncologist education, communication with patients, and health equity and access to cancer care. Common Sense Oncology is committed to promoting interventions that measurably improve the lives of patients, celebrating well-conducted trials and challenging interventions that may cause more harm than good.
3.3. Rethinking Clinical Trials (REaCT)
Dr. Mark Clemons spoke about his work as co-founder of REaCT. Formed in 2014 and based in Canada, REaCT is the world's largest cancer pragmatic trial group. REaCT has activated over 26 oncology trials and enrolled over 5000 patients across four provinces. Dr. Clemons provided an overview of the programme, discussing some of the barriers to carrying out PCTs and some of the innovative strategies REaCT has utilized to overcome these barriers. A key challenge was simplifying the patient consent process in a way that met Research Ethics Board (REB) standards. The Integrated Patient Consent (IPC) model is a verbal consent process that allows the regular 'circle of care' staff to explain the trial during usual patient care with minimal reliance on research staff. Verbal consent is documented in clinical notes, and a brief information handout is provided to the patient. REaCT has actively and successfully engaged with REBs in four provinces, such that a streamlined approval process for PCT protocols has been developed that does not necessarily involve a full board review at REaCT sites. REaCT has shown particular interest in dose optimization PCTs. Historically, the selection of dose for new agents, alone or in combination, has been based on small phase I dose-escalation studies that identify the maximum tolerated dose (MTD) of a drug—not necessarily the maximum effective dose. The limitations of this approach include the lack of diversity (age, comorbidity, race) among patients traditionally enrolled in phase I trials when the recommended dose for further investigation is established. As a result, dosing based on the MTD and the recommended phase II dose (the dose level below the MTD) from phase I trials runs the risk of exposing patients to higher and potentially more toxic doses of drug than they require for the biologic effect. Furthermore, the duration of treatment investigated in trials that later become adopted as a standard of care is often chosen in an arbitrary manner. Hence, there is a possibility that a lower (optimized) dose or shorter duration of treatment could deliver the same therapeutic benefit with lower toxicity (and cost). This is particularly relevant in the era of targeted and immune therapies, where the mechanism of action differs substantially from traditional cytotoxic agents. Immune checkpoint inhibitors, for example, may fully inhibit their targets at lower than the currently recommended doses, and the duration of inhibition may exceed the current standard-of-care scheduling interval. Anecdotally, patients often tell us that "the treatment made me feel worse than the cancer". Understanding, supporting and conducting dose optimization studies is key if we are to personalize oncology care. There are many examples, most recently the introduction of lenvatinib combined with pembrolizumab in advanced endometrial cancer, where despite the published trial data and regulator-approved dose, practitioners start patients at a lower dose of the drug based on its known toxicity and risks to patients. Investigating in a systematic manner what the optimal dose actually is, given this disconnect between the science, the trial data and clinical experience, is extremely important. It is essential to involve patients and oncologists in the planning of dose optimization trials, to use meaningful endpoints such as OS and QoL, and to keep the trials simple [ , , ].
The first part of the REaCT process is to generate PCT ideas from surveys asking patients and oncologists for their feedback on subjects affecting care across the spectrum from diagnosis to end of life. These surveys identify variations in practice that warrant further investigation with a PCT. Before embarking on a new PCT, the REaCT team performs a rigorous systematic review of the literature to ensure that a trial addressing a similar question has not been previously reported. Given the publication bias towards positive trials, it is necessary to search for meeting reports and data reported in other formats that describe a negative study outcome, as practice-changing evidence is sometimes not readily apparent. A key principle highlighted by Dr. Clemons and our other experts was that PCTs should have a limited number of endpoints; tests and interventions should follow the standard of care; and data collection should be sufficient to address the question but kept to a minimum. Unlike efficacy trials, PCTs of effectiveness developed through REaCT tend to become simpler with each iteration.
3.4. Innovative Multicentre Patient-Centered Approach to Clinical Trials in Surgery (IMPACTS)
Dr. Paul Karanicolas established IMPACTS, a pragmatic surgery trial programme, in 2017 to address uncertainty and poor outcomes in surgical and perioperative care. IMPACTS has several open randomized surgical oncology protocols at multiple sites in Ontario and Manitoba. He presented the programme overview, sharing several examples of surgical pragmatic trial concepts (e.g., CLEAN wound, an open, three-arm randomized trial of no wound irrigation, saline wound irrigation or povidone-iodine wound irrigation to prevent surgical site infections) that will enhance perioperative care for all patients. The IMPACTS programme searches for interventions where there is clinical equipoise in terms of benefit to patients. Historically, from the time at which an evidence gap is identified in surgical practice, it can take 10 years to address and resolve it using conventional (efficacy) trial designs. IMPACTS is able to design PCTs that address practical surgical questions much faster. IMPACTS builds the infrastructure (using lessons learned from REaCT, including the IPC model) to activate trials within a few months and complete them within a couple of years, so that surgeons can continually answer important, patient-centred questions. An innovative design methodology has been developed: umbrella protocols, with all standard elements included and no specific intervention listed, are pre-approved by the REB and have been approved through Clinical Trials Ontario (CTO), so that once they are open at one site, they are open provincially. Platform trial and pragmatic trial methodologies (with no fixed sample size or time frame) are common. Funding agencies (e.g., the Canadian Institutes of Health Research, CIHR) are starting to understand these methodologies, but funding still remains a challenge. IMPACTS trials try to minimize data collection by using patient-reported outcomes (PROs) as much as possible and by linking with other databases (to facilitate data acquisition), including the National Surgical Quality Improvement Program (NSQIP) and the Institute for Clinical Evaluative Sciences (ICES). All IMPACTS trials utilize integrated consent: clinicians provide patients with a one-page infographic that uses pictures of what will happen in the trial rather than a description in words. Patients are at the centre of IMPACTS trials, with potential trials being presented to patient groups for discussion and modification of the question and trial methodology.
3.5. Putting Patients First; Advocacy and Partnership in PCTs
Dr. Alicia Tone from Ovarian Cancer Canada (OCC), together with OCC's Patient Partners in Research (PPiR)—Karen Bemister, Shannon Kadar and Julie Mulligan—presented at the workshop. Results from an OCC survey requesting patient feedback on priorities for pragmatic clinical trials (N = 27 responses) were presented. Open-text responses were classified based on overall theme (one dominant theme per response). The most common theme was a desire for trials focused on 'patients like me' (26%); for instance, rare types of ovarian cancer, patients 65 years and older, and individualized dosing or patterns of response by heritage/ethnicity. Other common themes included treatment side effects (22%, including impact on QoL and medical menopause), monitoring for recurrence (15%), lifestyle factors (11%), communication (11%) and consideration of comorbidities/a holistic approach to care (11%). The PPiR representatives spoke individually to the audience, sharing their stories, perspectives, hopes and expectations of oncology care. The common thread among them was a desire for effective care that preserves and optimizes quality of life. Living longer with toxicity that limits quality of life was not a goal of care expressed by anyone. Respect for an individualized, precision approach to informed patient decision making, and individualized and economically responsible drug dosing supported by a common-sense framework, were specific and prominent themes. The patient partners emphasized the need for oncologists to simply ask about—rather than assume—what is important to patients in terms of QoL. The assessment of side effects of therapy, for example, has been studied extensively, and several publications demonstrate that healthcare providers are very poor at estimating the impact and magnitude of treatment side effects on patients [ , , , , , ]. Rotating round table sessions allowed patient partners and advocates to speak individually with experts, with a focus on how to incorporate the patient voice as a central consideration in pragmatic trial design. Specific examples of these questions, all with a precision medicine focus, included 'Do targeted therapies need to be dosed and scheduled in the same way for all stages of cancer, or can these be modified?', 'Are there differences in the effectiveness of therapy according to the ethnic background of the patient?', and 'Can the activity of a PARP inhibitor (PARPi; poly(ADP-ribose) polymerase inhibitor) be enhanced by combination with other drugs such as bevacizumab?'.
3.6. The Future of Pragmatic Trials
Dr. Marie-France Savard from REaCT presented future directions for PCT research in Canada, with emphasis on knowledge translation and the need to leverage the energy of the entire oncology community to ensure pragmatic trial results are disseminated widely and improve clinical impact. She highlighted the need for early engagement during study development with local, national and international oncology organizations (such as the GOC, Choosing Wisely Canada, the Optimal Cancer Care Alliance, the Canadian Association of Medical Oncology, Health Canada, the US Food and Drug Administration, and the REFINE (REduced-Frequency ImmuNE checkpoint inhibition in cancer) trial investigators) and patient advocacy organizations (such as OCC) to ensure that study findings are integrated into clinical practice guidelines (CPGs). To date, no practice guidelines incorporate results from PCTs; this limitation affects the applicability of CPGs to daily clinical practice. Choosing Wisely Canada could be a vehicle for disseminating PCT results to a receptive audience that is enthusiastic and supportive of practical, patient-centric treatment recommendations. This was suggested as an opportunity for the GOC to take a leadership role and become the first professional organization to support the results of PCTs in this way. Dr. Savard has lobbied for funding from public institutions (in the context of publicly funded healthcare) to support dose optimization trials, given their potential cost savings for cancer care programmes across Canada. To standardize the future development of REaCT trials, a PCT review checklist has recently been implemented by REaCT; the criteria included are those considered most important for the success of PCTs. In part, this checklist was also needed to help decide which PCT concepts to move forward, since support for the REaCT trial infrastructure is limited. See . Going forward, REaCT trials are committed to achieving most of the features listed in . The analysis of pragmatic trial results also needs to be pragmatic, matching the overall trial design. Based on this concept, a composite benefit–risk endpoint for patient outcomes was presented: trial data are summarized per intervention, and the interventions are then compared. This contrasts with the traditional approach, in which efficacy and safety are analyzed separately and then combined as separate outcomes of benefit and risk. Benefit evaluated in isolation from risk (and vice versa) is not a patient-centric way of presenting, or even thinking about, therapy. Patients and clinicians want to support therapy that is effective and has the fewest adverse events.
3.7. Pragmatic Trial Design Workshop
The final part of the GOC Pragmatic Trials Meeting centred on developing the submitted pragmatic trial concepts. Sixteen proposals were received (six ovarian, four endometrial, two vulvar, two cervical, and two supportive care), spanning pre-invasive cancer, surgery, systemic therapy and molecular monitoring of cancer. All trial concepts were reviewed by the committee and the experts; the Pragmatic Explanatory Continuum Indicator Summary (PRECIS) instrument served as a feasibility and quality guide. The concepts were sent to the expert panel in advance, and each expert chose a trial concept to develop during the meeting. Oncologists from across Canada were invited to participate, with representation from seven provinces. Round table discussions were held for four projects: (1) a dose optimization, precision oncology trial proposal for patients with relapsed endometrial cancer; (2) a surgery trial in ovarian cancer; (3) a drug and dose optimization trial in platinum-resistant ovarian cancer; and (4) an intervention trial of geriatric assessment to optimize precision immune-oncology therapy for older patients with metastatic endometrial cancer. A fifth table focused on working through the patient survey results presented earlier in the meeting, with the goal of identifying which ideas could be answered using a pragmatic approach. For some proposals, discussion progressed to the more intricate details of the trial interventions (i.e., which drug doses to compare); for others, larger issues were discussed, such as how to guarantee sufficient patient enrollment to evaluate the trial's primary outcome. Each group reported to the larger group at the end of the session, and additional suggestions were made for these trial concepts. Future steps and the group consensus are summarized in .
A follow-up debrief meeting was held with the REaCT team two weeks after the symposium to explore collaborative opportunities and elaborate on some of the identified future steps. Operational models in which individual participating trial centres serve as the sponsor while utilizing REaCT expertise were discussed. The GOC is moving forward with plans to develop PCTs and to further raise their profile for patients with gynecologic cancers. Shortly following the meeting—and in response to calls for more inclusive and equitable clinical trials from the patient community—Ovarian Cancer Canada launched an open competition focused on pragmatic trial protocols. BioCanRx (Canada's Immunotherapy Network) joined as a 50:50 partner following the great success of the meeting, bringing the total funds available to Canadian researchers to CAD 800,000. It is anticipated that discussions and lessons from the meeting will result in several applications being submitted for funding. We are also witnessing a resurgence of interest in pragmatic trials among oncologists globally: the Gynecologic Cancer InterGroup (GCIG) has scheduled a Brainstorming Meeting for 2025, and in January 2025, the NRG (named for its three parent groups, the National Surgical Adjuvant Breast and Bowel Project, the Radiation Therapy Oncology Group and the Gynecologic Oncology Group) conducted its first pragmatic trial symposium.
As a national society, the GOC has the opportunity to develop PCTs by providing structure, mentorship and a national framework to help activate pragmatic trial protocols. The GOC can provide a forum and venue for the translation and presentation of PCT results. Collaboration with REaCT, IMPACTS and Common Sense Oncology will help us scale up and reach across disease sites in a coordinated strategy that makes PCT results available and applicable to the global cancer patient community. Most importantly, partnership with patients and patient advocacy groups, including Ovarian Cancer Canada, will be essential to design relevant studies that our patients want and need. Finally, the GOC, together with patients and our partner organizations, can advocate for funding and for the incorporation of PCT outcomes into CPGs to facilitate practice change at provincial and national levels. We have the knowledge, interest and support we need to improve cancer care in real time, with engaged clinicians working directly with their patients. It is up to us to get it done, and the time is now.
Push and Pull: What Factors Attracted Applicants to Emergency Medicine and What Factors Pushed Them Away Following the 2023 Match

Emergency medicine (EM) has historically enjoyed a very competitive outcome in the National Residency Matching Program (NRMP, or "the Match"), with >95% of programs filling their spots. Beginning in 2022, however, a dramatic decline occurred, leaving many programs unfilled. This decline continued in 2023, with 46% of EM programs remaining unfilled. Although 79.1% of those programs filled in the Supplemental Offer and Acceptance Program (SOAP), this represents a tremendous change from previous years. The cause of this change is likely multifactorial, with major contributing factors being the expansion of the number of residency positions, student perceptions of the future job market within EM, and the virtual interview format. Other proposed etiologies of the decline include the corporate practice of EM (which occurs when a non-physician or corporation exerts control over the medical decision-making of physicians or collects reimbursement for their medical services), the expanded use of advanced practice practitioners (APPs) such as physician assistants and nurse practitioners in the emergency department (ED), and increased burnout following a global pandemic. Concerns regarding the job market and the expanded use of APPs are likely related to the 2021 EM workforce report by Marco et al, which proposed a range of potential outlooks based on various factors, the most publicized being a projected oversupply of emergency physicians by 2030. Several factors affected which programs were more likely to go unfilled in the Match. Gettel et al found that programs accredited within the previous five years, as well as programs under for-profit ownership, were more likely to go unfilled. Another study found that predictors of not filling were unfilled positions in the previous Match, smaller program size, location in the Mid-Atlantic or East North Central area, prior American Osteopathic Association accreditation, and a corporate ownership structure. Overall, programs felt their Match outcomes were worse than in previous years, but they perceived the quality of applicants as similar to previous years. Many factors influence a student's decision on which specialty to pursue, including role models, financial incentives, gender, degree of patient contact, procedural skills, prestige, and lifestyle. The factors most associated with a choice to specialize in EM include lifestyle, the diversity of patient presentations, flexibility in choosing a practice location, work-life balance, and perceived job satisfaction. Factors associated with earlier selection of EM include early exposure to the field, the presence of an EM residency program at a student's medical school, prior employment in the ED, previous experience as a prehospital practitioner, and completion of a third-year EM clerkship. In this study, we surveyed EM applicants from 2022 and 2023 to identify factors deterring or attracting them to the specialty, as well as modifiable influences impacting their career decisions. To restore the competitive nature of EM in the Match, it is important to know what motivates medical students to select EM as a specialty in the current environment. It is additionally important to further understand the factors contributing to decreased interest in EM, so that we can continue to address these as a specialty.
The project was conceived by the Council of Residency Directors in Emergency Medicine (CORD) Match Task Force, which includes representatives from the American Academy of Emergency Medicine (AAEM), the American Academy of Emergency Medicine Resident and Student Association (AAEM/RSA), the American College of Emergency Physicians (ACEP), the American College of Osteopathic Emergency Physicians (ACOEP), the ACOEP Resident and Student Organization (ACOEP RSO), the Association of Academic Chairs in Emergency Medicine (AACEM), CORD, the Emergency Medicine Residents' Association (EMRA), the Society for Academic Emergency Medicine (SAEM), and SAEM Residents and Medical Students (SAEM RAMS). Task force members collaborated to design the survey instrument. The conclusions in this paper represent the views and opinions of the individual authors and do not represent the views of the organizations. The study was approved by the Loma Linda University Health Institutional Review Board. We performed a literature review using PubMed to collect studies investigating factors impacting residency applicants' specialty choice. Questions were adapted from prior published studies. Current factors not previously investigated, such as COVID-19 or EM workforce projections, were added following an iterative process of consensus development within the research group. The survey was reviewed and edited by the CORD Match Task Force members and then pilot-tested by current medical students and residents. We analyzed the responses, and the survey was revised for clarity and brevity following the beta respondents' feedback. Medical students were asked multiple-choice questions regarding their residency application strategy, including whether they had applied to more than one specialty and, if so, which specialties they applied to. The survey participants were asked to rank specialty characteristics influencing their choice of EM as a career on a five-point Likert scale from strongly positive to strongly negative. They were also asked to rank the impact of prior experiences on their specialty choice on a five-point Likert scale from very positive to very negative. We investigated the impact of career advisement using multiple-choice questions with the option to select up to three responses. Finally, free-text response questions were asked to assess applicants' opinions about the causative factors leading to the 2023 EM Match results. Comment in this space was optional and not meant to reach saturation of themes; rather, it was meant to provide participants the opportunity to give additional details about their experiences. We used a convenience sample of EM-bound medical students who applied in the 2022 and 2023 Matches and those who considered or are considering applying to EM in upcoming Match cycles. Survey respondents were sent a web-based survey via Qualtrics (Qualtrics International, Inc, Seattle, WA) in the summer of 2023. Reminder messages were distributed monthly during the data collection period. The survey was distributed through the listservs of current medical students interested in EM, as identified by their membership in a national EM organization including AAEM/RSA, ACOEP RSO, EMRA, and SAEM RAMS. Surveys were also distributed through the SAEM Clerkship Directors in Emergency Medicine (CDEM) listserv to be sent to their recently matched applicants who had matched into EM or had considered but ultimately decided not to pursue EM.
Convenience sampling via listserv distribution did not allow for quantification of survey distribution or calculation of a response rate. Comparing the number of survey responses (213) to the number of applicants to EM in the 2023 Match (2,765) shows that our survey responses were equal to 7.7% of the total number of EM applicants in 2023. The intended survey participants included medical students who 1) considered but ultimately did not apply to EM residency; 2) applied to EM as their only specialty choice; 3) dual applied to EM and an alternate specialty choice; or 4) entered EM through the SOAP. A financial incentive of a $10 electronic gift card was given to the first 160 participants. Financial support for the study was provided by AAEM, AAEM/RSA, ACEP, ACOEP, AACEM, CORD, and SAEM. We analyzed data using Microsoft Excel 365 (Microsoft Corporation, Redmond, WA) to calculate means and percentages. We calculated 95% confidence intervals (CI) using an online tool. We took a phenomenological approach to the qualitative analysis: after establishing a codebook through an iterative process, two authors with experience in qualitative analysis (JM, BM) coded the free-text responses to generate an understanding of the phenomenon of the EM Match process in concert with the quantitative questions. Any disagreements between codes were resolved by a third author (MK).
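The descriptive statistics described here are straightforward to reproduce. The sketch below shows one way to compute a Likert-item mean with a t-based 95% CI (the study used an online tool, so the exact CI method is an assumption) and to apply the rule used in the results, in which a factor is labeled a negative influence when its entire CI falls below the neutral value of 3.0. The ratings are placeholders, not survey data.

```python
import numpy as np
from scipy import stats

def likert_summary(ratings, neutral=3.0, alpha=0.05):
    """Mean and t-based 95% CI for 1-5 Likert ratings; flags a factor as a
    negative influence when the entire CI falls below the neutral value."""
    x = np.asarray(ratings, dtype=float)
    mean = x.mean()
    sem = stats.sem(x)
    lo, hi = stats.t.interval(1 - alpha, len(x) - 1, loc=mean, scale=sem)
    return mean, (lo, hi), hi < neutral

# Placeholder ratings for one survey factor (1 = strongly negative ... 5 = strongly positive)
ratings = [2, 3, 2, 1, 3, 2, 2, 2, 3, 2]
mean, ci, negative = likert_summary(ratings)
print(f"mean {mean:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}, negative influence: {negative}")
```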
We received responses from 213 individuals. Demographics are shown in . Most respondents (92.8%) had already applied to residency. Of those, 87.2% applied to EM in the Match. Respondents secured an EM residency position in the 2023 Match (69.5%), the 2022 Match (9.6%), the 2023 SOAP (12.3%), the 2022 SOAP (0.5%), and by other means (5.3%). A small proportion of respondents (2.7%) were not entering EM residency. In comparison to applicants securing a position in the 2023 Match, our sample was fairly similar with regard to gender breakdown (57.2% male, 39.9% female in our sample vs 54.8% male, 45.2% female in the Match) but oversampled osteopathic seniors (42.7% in our study vs 24.3% in the Match). Regarding application strategy, 70.1% applied to only EM residencies. Some individuals (12.3%) applied to more than one specialty with EM preferred; the most common secondary specialties were internal medicine and family medicine. Applying to EM as the secondary specialty occurred in 2.1% of individuals, with the primary specialties being anesthesiology, interventional radiology, orthopedic surgery, and physical medicine and rehabilitation. Respondents who chose not to apply to EM at all made up 13.4% of responses. This group most commonly chose to apply to anesthesiology (39.1%), orthopedic surgery (17.4%), general surgery (17.4%), family medicine (13.0%), and internal medicine, pathology, or a preliminary year (each 8.7%). (The response option was "Select all that apply," so responses sum to >100%.) Applicants most commonly chose to apply to EM in the third year of medical school (33.5%) or before medical school (33.0%). The remaining responses were evenly split among the pre-clinical years of medical school (6.8%), the fourth year of medical school (8.9%), after medical school (6.8%), and during the SOAP (8.4%). Participants were exposed to EM in their medical school via required EM clerkships in the fourth year (42.1%), required clerkships in the third year (24.0%), EM electives in the fourth year (17.0%), and EM electives in the third year (11.1%). shows the degree of influence each factor held in the applicants' choice of EM as a career. The most frequently cited positive influences were EM residents on shift (4.42 on a 1–5 scale), EM attendings on shift (4.29), the fourth-year EM clerkship (4.62), and the third-year EM clerkship/elective (4.53). Prior experience in the ED in a non-physician role (4.43), in emergency medical services (EMS) (4.52), or as a scribe (4.55) was identified less frequently but rated as very positive. Job concerns/the workforce report (65.8%), burnout (56.7%), increased use of advanced practice practitioners (APPs) (50.8%), and corporate influence in EM (42.5%) were the most-cited reasons for advising applicants away from EM. Emergency department crowding (12.5%) and the EM experience during the COVID-19 pandemic (5.8%) were less commonly cited concerns. Participants were asked about advisement and its influence on their specialty choice: 68.5% reported being advised against choosing EM residency training. The most common sources of advisement away from EM were attendings/residents in non-EM specialties (73.3%), peers (50.0%), social media/message boards (47.5%), and EM attendings (37.5%). Medical school representatives in the Dean's office accounted for a small proportion of advisement away from EM (15.8%). Most participants in our survey (81.8%) reported that advising against entering EM did not change their application strategy.
Of those who initially pursued a different specialty, 5.7% ultimately entered EM in the SOAP, 5.0% applied to another specialty as a backup to EM, and 3.3% applied to EM as a backup specialty. Of those applicants who did not change application strategy despite negative advice about EM, the most commonly cited reasons were perceived fit with EM (73.7%), the flexible lifestyle of EM (64.6%), lack of interest in other specialties (49.5%), and doubt about the accuracy of the workforce report (49.5%). Very few participants (2.3%) said they would not advise a friend to apply to EM for the 2024 Match. Most (75%) would advise a friend to choose EM. Most of those who indicated they would advise a friend against applying to EM would do so because of concerns about fit for the specialty (42.9%) and the job market (22.9%), with corporatization of medicine, APP expansion, and burnout also mentioned. Most somewhat agreed or strongly agreed that their peers would be more interested in EM as a career if they were exposed to EM during a rotation in the third year or earlier (82.7%). Participants were asked what they thought would make EM more appealing to peers who were undecided about a specialty but were considering EM. The most common responses included early exposure to EM (31.5%) and alleviating concerns about job security raised by the EM workforce report (30.2%). Other suggestions included addressing the expanded use of APPs in the ED (10.1%), improving the perception of EM among medical students and physicians (9.4%), and improving work-life balance and compensation (8.7% and 8.1%, respectively). shows how applicants ranked different factors when choosing EM as a career. The most important positive factors were variety of patient pathology (4.66 on a 1–5 scale), lifestyle/flexibility (4.63), high-acuity patient care (4.43), length of residency training (4.37), and family considerations (4.36). Participants were asked specifically if they believed that EM is a “lifestyle specialty,” and 60.1% responded yes; 9.0% did not consider EM a lifestyle specialty, while 28.1% were neutral, and 2.8% were unsure. The factors negatively influencing a career choice in EM, defined as those whose 95% CI fell entirely below 3.0, were corporate influence in EM (2.51, 95% CI 2.33–2.69), ED crowding (2.52, 2.37–2.67), burnout (2.59, 2.44–2.74), and use of APPs in EM (2.63, 2.47–2.79). The average ratings for EM experience during the COVID-19 pandemic (2.95) and the workforce report/job security (2.85) were also below the neutral midpoint; however, the upper limits of their 95% CIs crossed 3.0 (3.12 and 3.03, respectively). Applicants were asked to identify the most important reason contributing to a larger-than-normal number of unfilled positions in the EM Match. They identified concerns about job security and the future EM workforce as the primary concern ( ). Qualitative responses to the increase in unfilled spots in the EM Match predominantly reflected concerns regarding the EM workforce report and job security. Themes and representative quotations are included in .
Applicants in our survey were drawn to EM by clinical experiences in the ED during the third and fourth year and by interactions with ED residents and attending physicians during those experiences. Unfortunately, only a small proportion of applicants in our survey had required EM clinical experience during the third year of training. Developing best practice recommendations for early exposure to EM during medical school may be a target for increasing interest among future applicants. Additionally, employment in an EM-related field (ie, EMS, scribe) prior to medical school was also a positive experience. Early identification of those students with prior EM-related employment may be an area for mentorship efforts by EM advisors. Applicants continue to be drawn to the high-acuity patient care, diverse patient pathology, and the flexible lifestyle EM offers. These findings are in line with prior studies of EM applicant attitudes and represent the cornerstone of EM’s appeal. Additional factors that appeal to applicants are the variety of fellowship options available after EM residency, the length of residency training, compensation, and availability of jobs in their desired location. Family considerations are important to applicants and, coupled with the desire for a flexible lifestyle, signal a desire for work-life balance. Shift work in the ED has downsides such as sleep transitions associated with night shifts and working weekends and holidays. However, applicants signaled that those trade-offs still compare favorably to being on call or working in a clinic five days a week. Highlighting the factors that resonate with applicants is a good starting point when promoting the specialty. With regard to factors pushing applicants away from EM, most applicants experienced badmouthing of EM and advising away from the specialty. In prior studies, over three-quarters of respondents reported experience with badmouthing of another specialty, and one-quarter changed their specialty choice because of it. When uncertain applicants are narrowing their specialty choices among a few serious options, contending with negativity about their career choice, both now and in the future, from friends or mentors in other specialties may be enough to sway them away from EM. The most common source of advice against EM in 2023 was not from peers, formal mentors, or Dean’s offices but from attendings and residents in non-EM specialties. Experiencing negative advisement from a trusted mentor about one’s desired specialty is likely impactful. In addition, applicants reported receiving negative pressure from their peers and social media. Most people involved in EM medical education suspected that applicants were being advised away from EM, and our data support this. Most assumed that advisors in the Dean’s office were steering students away from EM toward more prestigious specialties or those with safer match rates. However, that was not the case in our survey, as advisors in the Dean’s office ranked as only the sixth most frequent source of advisement away from EM. Additional factors pushing applicants away from EM were corporate influence in EM, ED crowding, burnout, the use of APPs in EM, the experience of emergency physicians during COVID-19, and concerns regarding job security stemming from the 2021 EM workforce report. Applicants are wary of entering a specialty dominated by corporations that place profits over patient care. Residencies at for-profit clinical sites had 1.3 times greater risk of not filling in 2023.
Applicants are showing an aversion to training at these sites. However, spots continue to fill during the time-limited SOAP, as unmatched applicants are likely eager to secure any training position. Further understanding of applicant concerns and of the experiences of residents in for-profit programs is important and requires additional study. Likewise, understanding the experience of EM residents who enter training via the SOAP is valuable for future investigation. Emergency department crowding not only negatively impacts quality of patient care; it also deters future emergency physicians from entering the field. Students on ED rotations see the challenges of finding space to re-evaluate patients, delays in workup, and prolonged care of patients boarding in the ED who are awaiting inpatient beds. Efforts to address boarding, as well as the implementation of surge-capacity plans, may improve this factor as students consider their specialty choice. Furthermore, burnout generated the largest number of moderate or strongly negative responses. Emergency medicine is widely cited as the specialty with the highest rates of burnout. Requirements to promote well-being and counter burnout exist in both undergraduate (Liaison Committee on Medical Education standard 12.3) and graduate medical education (Accreditation Council for Graduate Medical Education Common Program Requirements for residency VI.C). Prior qualitative research suggests that faculty modeling may influence residents’ career perspectives, indicating that targeting faculty for education on well-being and burnout may yield substantial benefits for both current and prospective residents. Additionally, applicants have concerns about the use of APPs in the ED. Many free-text responses cited “scope creep” of APPs as well as the negative impact on physician job availability as negative factors. Applicants signaled that they are paying attention to the topic of APP usage in the ED and that it is an important issue to them. National leaders in EM are actively working to protect the scope of all practitioners in the ED and continue to emphasize the importance of physician-led patient care teams. Further dissemination of these advocacy efforts and their effects on our specialty would be beneficial for applicants. Lastly, the workforce report has been frequently hypothesized as a major contributing factor to the rapid decline in EM residency applications over the last two years. Applicants to EM in our survey confirmed this hypothesis, citing projections stemming from the report as the most important factor leading to the significant rise in unfilled EM residency positions in the 2022 and 2023 Matches. Subsequent studies have addressed workforce considerations such as physician attrition and geographic distribution. Further investigation and clarity into the future EM workforce would aid applicants as they weigh their career decisions. Reinforcing the positive aspects of EM while addressing the negative factors above will go a long way toward bolstering the EM applicant pool and future workforce. The 2023 EM Match was unprecedented, with 554 unmatched positions. However, EM still matched 2,456 applicants, the fourth largest number in the 2023 Match. Our survey yields insights into the positive aspects of EM that draw applicants to the specialty and identifies negative factors following the 2023 EM Match.
Our survey may be impacted by selection bias, as our distribution method did not guarantee that every residency applicant who considered applying to EM residency was included. Because the exact number of individuals who received the survey solicitation is not known, a response rate could not be calculated, and it is unknown to what extent our results are representative of all EM residency applicants in the 2022 and 2023 Match cycles. Additionally, recall bias may contribute, as responses from applicants who matched to EM in 2022 were included. As potential survey participants were identified through their membership in national EM resident and student organizations, this study may not be representative of individuals who considered EM early in their medical school career and ultimately did not pursue it. Our survey responses represent 7.7% of the total number of applicants to EM in 2023, although it is unlikely the survey reached all applicants in the pool. Future studies may benefit from a longitudinal approach that solicits EM interest-group participants in the first two years of medical school and follows them through their respective Match years to improve response rates.
The specialty of emergency medicine experienced a sharp increase in unfilled positions in the 2022 and 2023 Matches. Most applicants received advisement away from EM, with the most common source being physicians in non-EM specialties. Applicants perceive corporate influence in EM, ED crowding, burnout, the influence of advanced practice practitioners in EM, and workforce concerns as driving forces behind the EM Match results. Applicants cited clinical experiences in the ED and interactions with EM attendings and residents as positive factors. High-acuity patient care, diverse patient pathology, and a flexible lifestyle were seen as positive characteristics of a career in EM.
Effect of berry maturity stages on the germination and protein constituents of African nightshade (Solanum scabrum)
In tomatoes, an increase in germination rate and germination percentage was recorded as fruits developed from the breaker stage to the red ripe stage . In another study, a maximum germination percentage was obtained in seeds extracted from tomato fruits at the breaker stage, and it decreased as the fruits continued to ripen. However, seeds from fruits at the red ripe stage germinated more rapidly, and further delayed harvesting led to a reduction in germination percentage . Therefore, African nightshade seeds harvested at different maturity stages may have varying quality. However, detailed knowledge about the optimal maturity stage for harvesting African nightshade seed is lacking. Proteomics is a powerful tool for monitoring the physiology of cells and tissues under specific developmental conditions. Several studies have already analysed seeds of multiple important crop species through proteomics, usually under diverse environmental conditions . Li et al. (2012) revealed variations in protein abundance at different stages of Brassica campestris L. seed development. A proteomic analysis of the change in the amount of stress-related proteins during seed development showed that some LEA proteins accumulated at physiological maturity and remained at high levels in mature seeds of Oryza sativa . This study aimed to determine the germination of African nightshade (Solanum scabrum) seeds extracted from berries at different maturity stages. Seeds of accessions that showed contrasting germination responses depending on their maturity were submitted to a gel-based proteomic comparison followed by mass spectrometry in order to identify physiological differences between the seeds of different maturity stages and between the contrasting accessions. This study sought to answer the following questions: How does germination of African nightshade seeds harvested from berries at the mature green stage and at the ripe stage differ? How do the protein constituents vary in accessions of African nightshades that differ in germination percentages? Based on the existing literature, we hypothesised that (i) the time of berry harvest affects the germination of African nightshades, with berries harvested at later maturity stages yielding higher germination percentages, and (ii) accessions of African nightshade differ in their protein constitution irrespective of maturation stage.
Seed source
African nightshade seeds were collected from five counties in western Kenya. The selected counties were known to have high activity in the production, marketing, and consumption of African nightshades. The seeds were collected from farmers, local seed traders, and agro-veterinary shops that sell seeds from registered seed companies (e.g., Simlaw seed company). Accessions Abuku 1 and Abuku 2 were obtained from the JKUAT African Indigenous Vegetable (AIV) project in Kiambu county. Accessions Olevolosi and SS 40 were obtained from the World Vegetable Center (WVC) in Arusha, Tanzania. Accession 18 was collected from Simlaw seed company in Nakuru county, while accessions 1 and 7 were collected from a farmer in Kakamega county, accession 33 from a farmer in Kisii, and Acc3 from a farmer in Siaya county. The nine accessions were selected to represent the various seed sources available to farmers in areas known for the production of African nightshades (Table ).
Field experiment for seed production
The field experiment was conducted at the Jomo Kenyatta University of Agriculture and Technology farm, block A (-1.09002690011, 37.010965343). Precipitation in the region is bimodal, with the long rainy season lasting from March to July and the short rainy season from September to November. The region has an average rainfall of 1,129.8 mm, an average temperature of 20.3 °C (68.5 °F), and an altitude of 1,490 m above sea level. The soils are predominantly heterogeneous clay loams, which are inherently fertile. The experiment was laid out in a Randomised Complete Block Design (RCBD) replicated three times as blocks A, B, and C. Each block had nine plots of 4 m × 2.4 m, with 140 plants per block. Berries were harvested from ten randomly selected plants per plot at two maturity stages: M1 = mature green (at the onset of colour change from green to purple, Table ); M2 = fully ripe (when the fruits were fully ripe and had a deep purple colour, Table ). The berries at the M1 stage were harvested from the first trusses 68 days after planting, while the berries at the M2 stage were harvested from the same trusses 80 days after sowing. Berries from the first trusses that flowered, set, developed, and matured at the same time were selected, marked, and harvested. Seeds from each of the samples were extracted from the berries immediately after harvesting, dried at 25–30 °C to a constant water content of 8.7%, and stored at 20–30% relative humidity in an airtight glass jar at room temperature for three months. A germination test was not conducted at harvest; the first germination test was performed three months after harvest in 2017, and seven months after harvest for the second experiment in 2018.
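The randomisation procedure within blocks is not described in the text; purely as an illustration of the RCBD layout above (nine accession plots randomised independently within each of three blocks), a minimal base-R sketch follows. The seed and the resulting layout are arbitrary.

```r
# Illustrative RCBD layout: 9 accessions randomised independently within
# each of 3 blocks (A, B, C). Accession labels follow the paper; the
# randomisation shown here is an example, not the layout actually used.
accessions <- c("Acc 1", "Acc 3", "Acc 7", "Acc 18", "Acc 33",
                "Abuku 1", "Abuku 2", "Olevolosi", "SS 40")
set.seed(42)  # arbitrary seed for a reproducible example
layout <- data.frame(
  block     = rep(c("A", "B", "C"), each = length(accessions)),
  plot      = rep(seq_along(accessions), times = 3),
  accession = c(sample(accessions), sample(accessions), sample(accessions))
)
head(layout, 9)  # randomised plot order within block A
```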
Germination assay
Three replicates of 100 seeds each from M1 and M2 of the nine African nightshade accessions were used in the germination assay (Table ; for pictures of plants and berries see Figure ). For each replicate, 100 seeds were placed in a plastic 9 cm Petri dish lined with Whatman cotton filter paper and moistened with distilled water. The Petri dishes were then placed in a growth chamber at a constant temperature of 25 ± 1 °C in darkness . The experiment was laid out in a Completely Randomized Design (CRD). The germination percentage was recorded for each treatment. Germination was determined based on radicle emergence, scored daily from day 3 after the onset of the experiment until day 14.
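The germination indices reported in the Results (germination percentage, Mean Germination Time, Mean Germination Rate) are not defined by formulas in the text; the sketch below assumes the standard definitions MGT = Σ(n_i · t_i) / Σ(n_i) and MGR = 1/MGT, applied to hypothetical daily radicle-emergence counts from one Petri dish of 100 seeds.

```r
# Hypothetical daily counts of newly germinated seeds (radicle emergence),
# scored from day 3 to day 14 as described above; 100 seeds per dish.
days   <- 3:14
counts <- c(0, 2, 18, 30, 20, 10, 4, 2, 1, 0, 0, 0)  # example data only

germination_pct <- 100 * sum(counts) / 100      # % of the 100 sown seeds
mgt <- sum(counts * days) / sum(counts)         # mean germination time (days)
mgr <- 1 / mgt                                  # mean germination rate (day^-1)

c(germination_pct = germination_pct,
  MGT = round(mgt, 2), MGR = round(mgr, 3))
```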
Seed preparation for protein extraction
Out of the nine S. scabrum accessions, three were selected for proteomic analysis based on the germination percentage results. Two accessions with minimal differences between the two maturity stages were chosen (Abuku 1 and Acc 33), together with the accession that showed the largest difference in germination percentage between the two stages and the lowest overall germination percentage (Olevolosi). Around 100 mg of seeds from each of the two maturity stages and three accessions were snap-frozen in liquid nitrogen (LN) in January 2018. Samples were then ground with a ball mill (MM 400, Retsch, VERDER Group, Netherlands) using stainless steel beads (7 mm diameter) in a reaction tube and either stored at -80 °C or used immediately for protein extraction.
Phenol protein extraction
The protein extraction was performed after Faurobert et al. (2007) . First, 750 µl of ice-cold extraction buffer (700 mM sucrose, 500 mM Tris, 50 mM EDTA, 100 mM KCl, 2 ml 2% β-mercaptoethanol, 1 ml 2% PMSF, in 100 ml ddH2O, pH adjusted to 8.0 with HCl) was added to the homogenised seed powder. The solution was vortexed and then incubated for 10 min on ice. Then, 750 µl of phenol was added to the samples. The samples were shaken at RT for 30 min and afterwards centrifuged for 10 min at 12,000 g at 4 °C. The upper phase (~400 µl) was then transferred to a new reaction tube. The same volume of ice-cold extraction buffer was added to the upper phase and vortexed. Samples were centrifuged for 10 min at 12,000 g and 4 °C, and the upper phase was transferred to a new reaction tube. The tube was then filled up with precipitation solution (0.1 M ammonium acetate in methanol) and incubated overnight at -20 °C. On the next day, samples were centrifuged for 3 min at 15,000 g at 4 °C. Pellets were resuspended three times in 1 ml precipitation solution and centrifuged for 3 min at 15,000 g at 4 °C after each resuspension. Afterwards, samples were resuspended in 1 ml acetone solution (80% (v/v) acetone) and centrifuged for 3 min at 15,000 g at 4 °C. The supernatant was discarded and the pellets were dried at RT under a fume hood. Finally, the protein pellets were weighed and frozen at -80 °C until further use.
2D IEF/SDS-PAGE
About 4 mg of protein pellet suspended in 350 µl rehydration buffer was used for 2D gel electrophoresis. The samples were transferred to IEF strips (18 cm, pH 3–11 NL, GE Healthcare, Freiburg, Germany). Isoelectric focusing was performed according to Mihr et al. (2003) . A polyacrylamide gel (13.5 ml 49.5%T/3%C acrylamide, 15 ml tricine gel buffer (3 M Tris, 0.3% (w/v) SDS), 6 ml 87% glycerine, 10.5 ml bidistilled H2O, 150 µl 10% APS and 15 µl TEMED) was poured between two glass plates (20 × 20 cm in size, with a gel thickness of 1 mm). The IEF strips were equilibrated for 15 min in 40 ml equilibration solution (50 mM Tris-Cl (pH 8.8), 6 M urea, 30% (v/v) glycerine, 2% (w/v) SDS, a spatula tip of bromophenol blue) containing 0.4 g dithiothreitol (DTT). A subsequent 15 min bath in 40 ml equilibration solution containing 1 g iodoacetamide (IAA) without DTT ensued, followed by a washing step in tricine gel buffer. The IEF strips were then placed on top of the acrylamide gel and run for 18 h at a maximum of 500 V and 30 mA per gel.
Gel staining procedure
Proteins were fixed in the gel for 2 h (15% ethanol, 10% acetic acid) and stained overnight with Coomassie blue CBB G-250 (Merck, Darmstadt, Germany) in a solution containing 1% (w/v) ortho-phosphoric acid (85%), 10% (w/v) ammonium sulfate, and 20% (v/v) ethanol.
Processing of gel images of different maturity stages of the berries
Scanned images of colloidal Coomassie-stained gels were analysed according to Berth et al. (2007) , using the Delta2D software 4.4 (Decodon, Greifswald, Germany). Three replicate gels per accession and maturity stage (M1 and M2) were analysed, and spots were detected automatically. Minor corrections of gel disturbances were made manually. To determine significant differences in spot patterns between M1 and M2 stage berries within an accession, a Student's t-test based on the normalised relative spot volume was performed (p ≤ 0.05). Additionally, only spots with a fold change higher than 1.5 were taken into consideration. Three individual comparisons were made between the different groups (M2 versus M1 Olevolosi; M2 versus M1 Abuku 1; and M2 versus M1 Acc 33). A PCA analysis can be found in Figure .
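The spot statistics were computed inside Delta2D, so the exact implementation is not reproduced here; the stated selection criteria (Student's t-test with p ≤ 0.05 on normalised spot volumes, fold change > 1.5 in either direction) can, however, be illustrated on a hypothetical spot-volume matrix in R:

```r
# Illustrative re-implementation of the selection criteria used in Delta2D.
# 'vol' holds normalised relative spot volumes: rows = spots, columns =
# three replicate gels each for M1 and M2 (simulated placeholder data).
set.seed(1)
vol <- matrix(abs(rnorm(600, mean = 1, sd = 0.3)), nrow = 100,
              dimnames = list(paste0("spot_", 1:100),
                              c("M1_1", "M1_2", "M1_3",
                                "M2_1", "M2_2", "M2_3")))

pvals <- apply(vol, 1, function(x) t.test(x[1:3], x[4:6])$p.value)
fc    <- rowMeans(vol[, 4:6]) / rowMeans(vol[, 1:3])  # M2/M1 fold change

# A spot counts as differentially abundant (DAS) when p <= 0.05 and the
# fold change exceeds 1.5 in either direction.
das <- pvals <= 0.05 & (fc > 1.5 | fc < 1 / 1.5)
rownames(vol)[das]
```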
Mass spectrometry analyses
Based on spot ID and differences in spot volume, spots were selected for picking that had a high fold change in a single comparison or that showed up in more than one comparison. Excised protein spots were in-gel digested with trypsin as described before and analysed by high pressure liquid chromatography (HPLC) electrospray ionisation (ESI) quadrupole (Q) time-of-flight (ToF) mass spectrometry (MS) using an Easy nano LC (Thermo Scientific) coupled to a micrOTOF Q II (Bruker Daltonics), using the parameters given in Klodmann et al. (2011) . Data processing was carried out with the ProteinScape 2.1 software (Bruker Daltonics). For protein identification, Solanum tuberosum protein sequences of “the working gene model set v6.1” (DM_1–3_516_R44_potato.v6.1.working_models.pep.fa.gz) were downloaded from Spud-DB ( http://spuddb.uga.edu/ ) on February 16th, 2021. The database search was carried out applying standard parameters, as given in Klodmann et al. (2011) . The complete list of proteins can be found in Table .
Reference map (GelMap)
To better visualise our protein data, an interactive reference map was created using GelMap ( www.gelmap.de ) (Fig. ). For this purpose, the IEF/SDS gel of Olevolosi (M2) was used as a basis on which all identified proteins from all gels are indicated. The reference map is accessible via www.gelmap.de/2700 (password: Solanum2024).
Statistical analyses
Germination data were analysed for statistical significance by a two-way Analysis of Variance (ANOVA). The germination percentage data were arcsine transformed prior to analysis (data included in Table represent original values). Fisher's least significant difference ( p < 0.05) was used to determine significant differences between accessions. The data were analysed using the GenStat version 22.1 Edition package. Venn diagrams were prepared based on spot IDs with the software Venny, version 2.1.0, by Juan Carlos Oliveros ( https://bioinfogp.cnb.csic.es/tools/venny/ ). Heat maps were drawn with R (R version 4.3.3, R Core Team, 2024) in RStudio (version 2023.12.1.402, Posit Team, 2024) with the library 'gplots' .
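The ANOVA itself was run in GenStat; as a rough equivalent, and assuming a data frame with one germination percentage per plot, the arcsine square-root transformation followed by a two-way ANOVA could be scripted in R as below. The simulated percentages are placeholders, not study data.

```r
# Sketch of the germination analysis: arcsine(sqrt(p)) transformation,
# then a two-way ANOVA with accession, maturity stage and their interaction.
germ <- expand.grid(accession = paste0("acc", 1:9),
                    maturity  = c("M1", "M2"),
                    rep       = 1:3)
set.seed(7)
germ$pct <- pmin(pmax(rnorm(nrow(germ),
                            mean = ifelse(germ$maturity == "M2", 80, 50),
                            sd = 10), 0), 100)  # placeholder percentages

germ$asin_pct <- asin(sqrt(germ$pct / 100))  # arcsine square-root transform

fit <- aov(asin_pct ~ accession * maturity, data = germ)
summary(fit)  # tests both main effects and the accession:maturity interaction
```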
Germination assays
The data on seed germination for the nine S. scabrum accessions harvested at two development stages are presented in Table . There was a significant effect ( p < 0.05) of the maturity stage, the accession, and their interaction (Table ). Seeds harvested at the ripe stage (M2) recorded higher germination percentages than seeds harvested at the mature green stage (M1) in both the 2017 and 2018 experiments. For the M1 seeds, Accession 1 exhibited the widest range in germination percentage: 15% in 2017 (maximum of 46%, minimum of 31%) and 14% in 2018 (maximum of 39%, minimum of 25%). Olevolosi recorded the narrowest range in germination percentage, 2%, in both 2017 and 2018. For the seeds harvested at the ripe stage, Accession 18 exhibited the widest range, 10%, with a maximum of 86% and a minimum of 76% in 2017, while Abuku 2 recorded the widest range (7%) in germination percentage in 2018. Abuku 1 had the narrowest range, 2%, in both years. The Mean Germination Time (MGT) of the M1 seeds was higher than that of seeds harvested at the M2 stage. At the M1 stage, Accession 33 and Abuku 1 showed the shortest mean germination times (5.38 and 5.22 days in 2017 and 2018, respectively) (Table ). For the seeds harvested at the M2 stage, Accession 33 recorded the shortest mean germination time (4.78 and 4.46 days in 2017 and 2018, respectively). Olevolosi recorded the longest mean germination time for both M1 and M2 seeds in 2017 and 2018 (Table ). The Mean Germination Rate (MGR) of the seeds harvested at the M2 stage was greater than that of the seeds harvested at the M1 stage. Abuku 1 recorded the greatest proportion of seeds germinating within the given period, while Olevolosi recorded the smallest, for both M1 and M2 in 2017 and 2018. On the basis of the statistical analysis of the germination assays, three accessions were chosen for further investigation through proteomic analysis: Olevolosi, as the accession with the lowest germination percentage, and Abuku 1 and Acc 33, which both showed high germination percentages at the M2 stage but differed at the M1 stage, where Acc 33 displayed lower germination. While there was no significant difference in germination percentage ( p < 0.05) between Abuku 1 and Acc 33 in 2017, Olevolosi showed significantly lower germination percentages than Abuku 1 and Acc 33 in both experiments for seeds harvested at the M2 stage.
Proteomic analyses of selected spots
Proteomic analyses were performed to determine proteins changing in abundance as an effect of the maturity stage at harvest of S. scabrum seeds. Gel analysis displayed a total of 563 spots, with 108 differentially abundant spots (DAS) for Olevolosi (more abundant in M1 = 43; in M2 = 65), 126 DAS for Acc 33 (more abundant in M1 = 70; in M2 = 56), and 130 DAS for Abuku 1 (more abundant in M1 = 60; in M2 = 70) (Table ). Based on the spot ID numbers, a separate comparison of the three accessions for M1 and M2 was performed (Fig. ). The majority of DAS were found to be specific for each accession (up to 34.9% of total DAS in Acc 33 M1). However, minor fractions of the identified spots were also found in overlaps between accessions. Acc 33 and Abuku 1, which represent the more readily germinating accessions, displayed an overlap of 11 spots (7.4% of DAS) for M1 and 5 spots (3.0% of DAS) for M2. Comparison between Olevolosi, which represents the lowest germinating accession, and Acc 33 revealed 6 overlapping spots (4% of DAS), and Olevolosi compared to Abuku 1 showed 4 spots (2.7% of DAS) for M1 and 6 spots (3.6% of DAS) for M2. All three accessions displayed an overlap of only 1 spot (0.7% of DAS) for M1 and 3 spots (6.1% of DAS) for M2. Spot selection for picking from the gels was based on strong regulation and the Venn comparison: only spots with strong regulation were extracted from the gels, together with spots present in two accessions with the same regulation, as well as spots that, based on their ID, overlapped between accessions. Thus, a total of 91 spots was picked for MS analysis. Of these, 75 spots were identified, representing an identification rate of 82.4%. In total, 206 proteins were identified in these 75 spots (Table ). With the publicly available tool GelMap, a protein reference map was generated, which may be complemented and used in future research projects dealing with seed proteins of African nightshades (Fig. ). The identified proteins were sorted according to their function based on KEGG functions. Proteins associated with the metabolism of co-factors and vitamins, pyruvate metabolism, and flavonoid biosynthesis were found in seeds at the M2 stage but not at the M1 stage, while proteins associated with the citric acid cycle and oleosomes were present at M1 but missing at the M2 stage in all three accessions (Fig. ). Overall, most proteins were assigned as seed storage proteins, with increased numbers at M2 and higher numbers for Olevolosi, followed by the functional classes “metabolism – hydrolases”, “genetic information processing”, and “metabolism – carbohydrate metabolism”. In total, 22 different functions were assigned (Fig. ). Proteins categorised as seed storage proteins were further analysed.
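The overlaps above were obtained with the Venny web tool; the same counts can be derived directly from per-accession DAS spot-ID vectors, as in this sketch with purely hypothetical IDs:

```r
# Illustrative overlap computation between accession-specific DAS lists.
# The spot IDs below are invented placeholders, not the published DAS.
das_acc33  <- c("s001", "s014", "s027", "s033", "s058")
das_abuku1 <- c("s014", "s027", "s045", "s072")
das_ole    <- c("s027", "s058", "s090")

shared_all <- Reduce(intersect, list(das_acc33, das_abuku1, das_ole))
well_only  <- setdiff(intersect(das_acc33, das_abuku1), das_ole)

shared_all  # DAS common to all three accessions
well_only   # DAS shared only by the two well germinating accessions
```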
Identification of proteins from DAS displayed a high number of seed storage proteins in the low germinating accession Olevolosi
Data processing revealed four different types of seed storage proteins in the picked spots: RmlC-like cupins superfamily protein, cruciferin, cupin family protein, and vicilin (Table ; Fig. ). The highest abundance within the DAS was found in Olevolosi in the form of RmlC-like cupins superfamily protein (15 spots in M1, 16 spots in M2). This was also the dominant seed storage protein for Acc 33 (7 spots in M1, 4 spots in M2) and for Abuku 1 (5 spots in M1, 6 spots in M2), but with a lower number of DAS. Cruciferin was the second most numerous protein group within the DAS of Olevolosi (4 spots in M1, 6 spots in M2), Acc 33 (2 spots in M1, 4 spots in M2), and Abuku 1 (3 spots in M1, 5 spots in M2). The cupin family protein was detected in Olevolosi (1 spot in M1, 4 spots in M2), Acc 33 (0 spots in M1, 2 spots in M2), and Abuku 1 (3 spots in M1, 1 spot in M2). Vicilin was only found in Acc 33 M1 and in Abuku 1 M2 (Table ). However, these numbers are based on the selected DAS; whether an analysis of the whole proteome would display a different picture remains unclear.
Regulation analysis of identified seed storage proteins
Spot IDs for the calculation of the heat map in Fig. were based on the identification of seed storage proteins within the list of identified spots for each gel. Only those spots displaying at least one potential seed storage protein were used, and their regulation was compared in the heat map. The heat map is based on the M2/M1 ratio of normalised spot volumes for seed storage proteins that changed as seeds transitioned from the M1 to the M2 stage. For example, spot 421 was significantly more abundant in M2 stage seeds than in M1 stage seeds of accession Olevolosi (Fig. ). Some spots, like ID 400, showed a transition towards accumulating storage proteins as seeds changed from the mature green stage to the ripe stage (Tables S3 and S4). The heat map displayed a hierarchical clustering with a clear separation of the well germinating and the low germinating accessions when only the abundance of the spots containing seed storage proteins was analysed. Therefore, differences in seed storage protein composition between the accessions at the two maturity stages (M1 and M2) might be one reason why the accessions differed in germination rates.
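The heat maps in this study were drawn with the 'gplots' library (see Statistical analyses); a condensed sketch of the same approach, clustering accessions on hypothetical log2(M2/M1) spot-volume ratios, is shown below.

```r
library(gplots)  # provides heatmap.2(); install.packages("gplots") if needed

# Hypothetical log2(M2/M1) abundance ratios for storage-protein spots
# (rows) across the three accessions (columns); placeholder values only.
set.seed(3)
ratios <- matrix(rnorm(30), nrow = 10,
                 dimnames = list(paste0("spot_", 1:10),
                                 c("Olevolosi", "Acc33", "Abuku1")))

# Hierarchical clustering of rows and columns with a blue-red colour scale;
# if storage-protein regulation differs between accession groups, the
# column dendrogram should separate well from low germinating accessions.
heatmap.2(ratios, trace = "none", col = bluered(25),
          dendrogram = "both", margins = c(8, 8))
```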
Heat map of interesting spots beyond the seed storage proteins displayed clustering of the well germinating accessions
After extracting the seed storage protein entries from the data, the remaining data were analysed to identify the proteins differing in regulation between the well and low germinating accessions. For this purpose, proteins were selected that (i) displayed an overlap according to the Venn diagrams (Fig. ), (ii) were more abundant in the well germinating accessions at M1 but were identified only in the M2 seeds of the low germinating accession (and vice versa), or (iii) displayed a high abundance in the well or low germinating accessions in only one of the maturity stages. The selected spots were then displayed in a heat map of their regulation based on the M2/M1 ratio of spot abundance (Fig. ). Again, the abundance of the selected spot IDs allowed a clustering of the accessions into well and low germinating groups. Proteins with a known seed maturation function were also identified, e.g. LEA proteins (late embryogenesis abundant (LEA) family proteins: higher abundance in Acc 33 M2, spot ID 175; Olevolosi M2, spot IDs 166, 169, 175; late embryogenesis abundant domain-containing proteins / LEA domain-containing proteins: higher abundance in Acc 33 M2, spot IDs 79 and 330; Olevolosi M2, spot IDs 79, 91, 92, 330, 333) and Major latex-like proteins (MLPs: higher abundance in Acc 33 M1, spot ID 553; Acc 33 M2, spot IDs 203, 204; Abuku 1 M1, spot ID 553; Abuku 1 M2, spot IDs 203, 204; Olevolosi M2, spot IDs 203, 204). However, as these were present in all accessions among the DAS analysed, they are unlikely to explain the differences in germination. Further proteins identified (excluding the seed storage proteins) comprised, e.g., phosphoglycerate kinase (higher abundance in Acc 33 M1: spot IDs 130, 131, 421; Olevolosi M2: spot IDs 130, 131, 421), glutamate decarboxylase (higher abundance in Abuku 1 M1: spot ID 95; Olevolosi M2: spot ID 95), eukaryotic translation initiation factor 4A1 (higher abundance in Acc 33 M1: spot ID 121; Abuku 1 M1: spot ID 121), and an oleosin family protein (higher abundance in Acc 33 M1: spot ID 562; Abuku 1 M1: spot ID 562).
Colour change indicates ripening in S. scabrum accessions

The stage of maturity had a significant effect (p < 0.05) on seed germination for all accessions of African nightshade analysed in this study (Table ). These findings are in accordance with Tetteh et al. (2018) , who reported that tomato seeds harvested at later stages of maturity germinated better than those harvested at earlier stages. The significantly higher germination percentages recorded in seeds harvested at the purple maturity stage (M2) may be due to the completed development of seed organs and maximum dry matter accumulation in the seeds, as compared to seeds harvested earlier at the mature green stage (M1). These results conform with Valdes et al. (1998) , who reported the same for tomato fruits. However, nightshade seed germinability did not change during storage. The Abuku 1 and Acc33 seeds displayed a high germination rate at both maturity stages; the germination percentage observed for these two well-germinating accessions was above 85%, which is the recommended percentage for high-quality seeds . Olevolosi, however, displayed significantly lower germination percentages at both maturity stages and did not reach the value for high-quality seeds (in assay 1: 7.3% against 76.3% and 87.0% for Acc 33 and Abuku 1 at M1, and 67.3% against 98.7% and 98.0% for Acc 33 and Abuku 1, respectively, at the M2 stage; Table ). Berries of S. scabrum display a colour change from green to purple as they advance in age (see Table , pictures for maturity stages). The protein data based on LEA proteins, MLPs and proteins found during seed maturation and drying stages also suggest that seeds are more mature and start to ripen at the M2 stage (Fig. , Table ; see further discussion). Similarly, in tomatoes it was shown that high-quality seeds could be obtained from half-ripe and fully-ripe berry stages of different accessions based on the colour change of the berries (from green to red), leading to a high germination percentage and seedling emergence . In contrast, a study by Ahmed et al. (2018) confirmed that paprika berries harvested at the red ripe stage could give rise to superior quality seeds, unlike those harvested at the dark green and colour breaker stages. The superior seeds of fruits harvested at the red stage were attributed to the physiological maturity of the seeds, which might be related to increased accumulation and assimilation of reserves from source to sink. Therefore, colour change might be a good visual criterion for farmers to detect seed maturity in S. scabrum . This might also explain the low germination rate of Olevolosi M1: the other accessions may already have been further along in their ripening process and therefore displayed a higher germination percentage at the M1 stage.

Identification of proteins displayed a high number of seed storage proteins within the DAS in the low germinating accession Olevolosi

Seed storage proteins were the predominant protein group within the DAS found in the analysed African nightshade seeds (Figs. and ). Hay et al. (2017) reported that proteins whose main function is to act as a storage reserve for nitrogen, carbon and sulphur accumulate significantly in the developing seed and are rapidly mobilised during seed germination.
Seed storage proteins were previously classified based on their solubility and are traditionally sorted into families and superfamilies . More recently, however, these proteins have been categorised into superfamilies based on sequence information and amino acid conservation. Four different types of seed storage proteins were identified in the analysed S. scabrum seeds: RmlC-like cupins superfamily protein, cruciferin, cupin family protein, and vicilin (Table ; Fig. ). In accordance with the findings of Koshiyama et al. (1983) , RmlC-like cupins superfamily protein was the most abundant in African nightshade seeds. These proteins belong to the 11–12 S globulins and are the most abundant seed storage proteins among higher plants, synthesised during seed maturation on the mother plant . Although RmlC-like cupins superfamily proteins were the most abundant storage proteins within the DAS in the African nightshade seeds, their role in seed maturity and development is not as well documented as their roles in other aspects of plant development and metabolism (Table ; Fig. ). The specific role of RmlC-like cupins in seed germination can vary depending on the plant species and the particular cupin protein in question. This functionally diverse superfamily can be divided into enzymatically active and inactive proteins . Even within the enzymatically active group, the functions are diverse, ranging from metal-binding or sugar-binding proteins to sugar isomerases (epimerases), oxalate oxidases (OXOs), superoxide dismutases (SODs) and many other functions . Further investigations would therefore be required to characterise the functions of the specific proteins found in this study. Cruciferin was the second most abundant seed storage protein within Olevolosi (4 spots in M1, 6 spots in M2), Acc 33 (2 spots in M1, 4 spots in M2), and Abuku 1 (3 spots in M1, 5 spots in M2) (Table ). Cruciferins are also a group of 11 S globulins that are known to accumulate during the seed filling phase, e.g. in rape seeds , and serve as a source of nitrogen and amino acids for the germinating embryo . In mature dry seeds of Brassica napus , however, levels of cruciferin were hardly detectable, which was attributed to a higher turnover of cruciferin in later seed development . This might indicate that, even at the M2 stage, seeds of S. scabrum are still metabolically active and not yet fully mature, although this would have to be tested further. Other cupin family proteins were also detected in Olevolosi (1 spot in M1, 4 spots in M2), Acc 33 (no spots in M1, 2 spots in M2), and Abuku 1 (3 spots in M1, 1 spot in M2), underscoring that cupin-type proteins dominated the seed storage proteins within the DAS analysed in S. scabrum seeds. Vicilin, by contrast, was only found to be present in Acc 33 M1 and in Abuku 1 M2 seeds (Table ). Vicilin proteins are 7 S globulins with diverse functions, including but not limited to seed desiccation tolerance and plant defences against fungi and microbes , . In a proteomic analysis of germinating tomato seeds, vicilins were found to be highly abundant in both the embryo and the endosperm . It might therefore be interesting to analyse the changes in seed storage proteins during germination of S. scabrum seeds for comparison with other Solanum species.
General maturation patterns indicated by LEA proteins and MLPs

Maturation is an essential step in seed development that is characterised by a decline in reserve synthesis, acquisition of desiccation tolerance, and dormancy. The heat map based upon the abundance of the normalised spot volumes (M2/M1) for the spots of interest (excluding seed storage proteins) showed changes as seeds transitioned from the M1 to the M2 stage (Fig. ; see also Fig. and GelMap online ( www.gelmap.de/2700 )). Several proteins known to be of importance as seeds progress in maturation were identified, including late embryogenesis abundant (LEA) family proteins as well as MLP-like proteins (major latex proteins). LEAs were highly abundant in the seeds harvested at the ripe stage and were not detected among the regulated spots in the seeds harvested at the mature green stage: they were highly abundant for the accession Olevolosi at M2 (spots 79, 91, 92, 166, 169, 175, 330 and 333; M2/M1 ratio 1.65–44.25) and were also found in Acc 33 (spots 79, 175 and 330; M2/M1 ratio 1.92–6.36), but were not identified within the DAS in Abuku 1 seeds. LEA proteins accumulate during late seed developmental stages and play an important role during seed drying, as they confer desiccation tolerance on seeds , . Further time points during seed ripening would, however, have to be analysed. Major latex-like proteins (MLP-like proteins) were also identified as DAS in all accessions (spots 203, 204 and 553). Only in the well germinating accessions, however, were these proteins also found in higher abundance at the M1 stage (spot ID 553) than at M2. One MLP-like protein (MLP-like protein 43) has been identified as being of major importance for drought tolerance: a knockout mutant of Arabidopsis thaliana was sensitive to drought, whereas overexpression lines were shown to be drought tolerant . Therefore, an increase in MLP-like protein in later stages of seed maturation might also confer desiccation tolerance in the seeds of the analysed accessions. As an MLP-like protein was already identified within the DAS of the well germinating accessions in the M1 phase, this might indicate that the well germinating accessions were already in a later stage of seed maturation, namely seed desiccation.

Delay in seed maturation of the low germinating accession Olevolosi includes a postponed accumulation of PGK, GAD and glycosyl hydrolase

Phosphoglycerate kinase (PGK; E.C. 2.7.2.3) was identified among the DAS of M1 seeds of Acc33 (spots 130, 131, 421) as well as in Olevolosi M2 seeds (spots 130, 131, 421). Plant genomes contain three or more PGK genes (Fig. ; see also Fig. and GelMap online ( www.gelmap.de/2700 )). PGK is an enzyme involved not only in photosynthesis but also plays a major role in glycolysis and gluconeogenesis . It has been reported that in seeds of, e.g., Glycine max and Brassica napus , the abundance of PGK decreased with commencing seed maturation. Agrawal et al. (2008) stated that during the stage of seed drying, PGK abundance was very low. As a higher abundance of PGK was identified in Acc33 M1, whereas for the low germinating accession Olevolosi it was more abundant in M2 seeds, this might also reflect a difference in the timing of seed maturation between the accessions, with Olevolosi reaching maturity later. The identification of glutamate decarboxylase (GAD) also supports the hypothesis that Olevolosi seeds might take longer to ripen or are delayed in ripening.
GAD in seeds acts as a metabolic link between carbon and nitrogen metabolism by catalysing the unidirectional decarboxylation of glutamate to form γ-aminobutyric acid (GABA) . In African nightshade seeds, GAD was found to be more abundant in Abuku 1 at the M1 stage and in Olevolosi in seeds of the M2 stage. Angelovici et al. (2010) recorded a shift in Arabidopsis thaliana seeds, during the transition from the reserve accumulation stage to desiccation, from a general decrease in unbound metabolites to the accumulation of a set of specific metabolites, including γ-aminobutyric acid. Furthermore, Fait et al. (2011) reported that GABA content is differentially regulated during the late seed maturation-to-desiccation stage, and its accumulation is indicative of a shift toward N metabolism. In Solanum lycopersicum berries, up-regulation of the GABA shunt led to the alteration of storage reserve accumulation and fatty acid metabolism, which are linked to seed filling . It has also been reported that in tomato fruits, GABA increased after flowering, reached a peak at the mature green berry stage and rapidly declined after the breaker stage , , ; whether this holds true for fruits of S. scabrum has to be further analysed. Taken together, the low germination in Olevolosi could be attributed to delayed maturity in this accession, as seen in the differences in the abundance of PGK and GAD, which are associated with the late stages of seed maturation, and maturity cannot be identified by berry colour alone in this accession. Glycosyl hydrolase family proteins are diverse in function; however, they can function as glucosidases (EC 3.2.1.39), which are associated with germination and seed maturation . Sequence analysis revealed that Soltu.DM.06G029160.1 (spots 60, 510, 512 and 454) includes putative glucosidases. A previous study in Arabidopsis showed that the expression of some genes in the glycosyl hydrolase family was detected in dry seeds and induced upon germination . In the well germinating accessions Acc 33 and Abuku 1, glycosyl hydrolase family proteins were highly abundant in seeds of the M1 stage, whereas for the low germinating accession Olevolosi, glycosyl hydrolases were also identified at the M2 stage. Given the scarce information on these proteins regarding the stages of seed maturation, further experiments would be needed to link them to the maturation stages of S. scabrum . The well germinating accessions displayed unique responses for more abundant proteins in seeds of the M1 stage (Figs. and , and GelMap online ( www.gelmap.de/2700 )). The eukaryotic translation initiation factor 4A1 (eIF4A1) was found to be more abundant only in the well-germinating accessions in M1 seeds. This protein is fundamental to gene expression: it is an ATP-dependent RNA helicase that plays a major role in unwinding RNA secondary structure, leading to ribosomal binding and translation . Furthermore, these proteins stimulate stress-induced pathways that mediate salinity stress tolerance . The overexpression of translation initiation factor 4A in peanut ( Arachis hypogaea ) was found to improve drought, salinity, and oxidative stress tolerance .
However, as eIF4A1 was only found to be more abundant in M1 seeds, this might hint at a decline in translation activity in M2 seeds, which would correlate with the previously described findings regarding PGK, GAD and glycosyl hydrolase, namely that the M2 seeds of the well germinating accessions are already in later stages of seed maturation or undergoing seed desiccation.

Oleosin was more abundant in seeds of the well germinating accessions at the M1 stage compared to the M2 stage

Oleosins are proteins associated with oil bodies in seeds. Oil bodies are important organelles within seeds, and oleosins are the most dominant protein on the oil body surface, followed by caleosins and steroleosins . Oleosins are specific to plants and allow the accumulation of neutral lipids that sustain seedlings during germination . They are associated with desiccation tolerance in tomato seeds, as they were linked to protective functions within tomato seeds . Oleosins were found to accumulate during seed maturation in A. thaliana ; however, the authors reported that at the mRNA level OLE1 , OLE2 and OLE3 were most abundant during ongoing maturation, whereas at the protein level OLE5 was also highly abundant per seed at the end of maturation . This is in accordance with the data from this proteomic study, where the sequence identified was similar to oleosin 5 (spot 562; Soltu.DM.12G028510; oleosin 5-like). Following the Gene Ontology (GO) classification taken from the SpudDB database ( http://spuddb.uga.edu/index.shtml ), the biological processes include post-embryonic development and reproduction (based upon the TAIR database; https://www.arabidopsis.org/servlets/TairObject?type=locus&name=At3g01570 ). Within the TAIR database, the expression based on the RNA-Seq data from Klepikova et al. (2016) indicates that this oleosin shows high abundance at the mature dry seed stage of A. thaliana . As this protein was also only identified as more abundant in the well germinating accessions at the M1 stage of the seeds, but was not regulated in the low germinating accession, it might be an interesting candidate to analyse at different time points in the berry and seed development of S. scabrum .
The present study revealed that African nightshade seeds of high quality can be obtained from berries harvested at the purple ripe stage irrespective of the accession. However, for Olevolosi further indicators of maturity need to be developed, since its germination percentages remained significantly below the commercially recommended germination rate, which was reached by Acc 33 and Abuku 1 at the M2 stage. There were a number of differentially abundant proteins between the two stages of maturity, the mature green stage and the ripe stage, indicating metabolic differences between the two stages, e.g. LEAs, MLP-like proteins, PGK, GAD, glycosyl hydrolase, eIF4A1 and oleosins. This may suggest that seeds harvested from berries of African nightshade accessions at the mature green stage were still within the early maturation phase (accumulation of storage reserves), especially in the low germinating accession, while seeds harvested from purple ripe berries were already in a later stage, i.e., seed desiccation, in the case of the two well germinating accessions. The proteomic analysis made differences between the well and the low germinating accessions visible, which led to the conclusion that the low germinating accession Olevolosi was delayed in its maturation. These findings give first proteomic insights into the seeds of this orphan crop. Further omics methods, such as metabolomics, as well as an increased number of accessions under investigation, would be valuable to corroborate these findings on seed maturation in S. scabrum . The presented data suggest that proteomic studies may also be of major importance for orphan crops such as S. scabrum , to determine the optimal maturity stage of seeds for harvesting and to better understand accession-dependent variation in germination rate. Moreover, the software GelMap was expanded and now contains a reference proteome map for S. scabrum , which is open to the scientific community to deepen seed protein knowledge in this African leafy vegetable.
Below is the link to the electronic supplementary material (Supplementary Materials 1–7).
Prevalence and determinants of maternal near miss in Ethiopia: a systematic review and meta-analysis, 2023

Maternal near miss (MNM) refers to a critical condition wherein a woman comes close to death due to pregnancy or childbirth complications within 42 days after the termination of pregnancy, regardless of the place or length of the pregnancy, but survives either because of the care provided or by chance. When assessing obstetric care, MNM is a more informative metric than maternal mortality . The occurrence of maternal near miss is also more frequent than maternal mortality: several studies indicate that maternal near misses are 15 times more prevalent than maternal deaths , and another study conducted in low-resource countries demonstrated that the prevalence of maternal near misses was 26 times higher than that of maternal deaths . Globally, approximately 140 million births occur annually, and the impact of pregnancy-related complications on women's lives remains substantial, particularly in developing countries . More than 99% of maternal deaths occur in low- and middle-income nations, primarily due to severe poverty, which hinders women's access to adequate healthcare and education . Owing to substantial efforts made during the Millennium Development Goals (MDG) era to combat maternal mortality, the maternal mortality ratio has declined by 44% . Despite this progress, the persistently high levels of maternal mortality and morbidity remain concerning, particularly considering that 99% of maternal deaths occur in developing countries. In impoverished nations, the likelihood of maternal death is 1 in 41 live births, while in developed nations it is 1 in 3300 live births; additionally, for every woman who dies, approximately 20 more women suffer from acute and chronic complications related to pregnancy and childbirth . Ethiopia is among the sub-Saharan African nations facing a significant maternal mortality challenge. According to the Ethiopian Demographic and Health Survey (EDHS, 2016), the maternal mortality ratio (MMR) in Ethiopia stands at 412 per 100,000 live births, and, disturbingly, for every maternal death, 10–15% of women encounter pregnancy-related complications . Ethiopia is among a group of five nations that together contribute 50% of maternal fatalities on a global scale , and every year approximately 20,000 women lose their lives due to complications arising from pregnancy and childbirth . In recent decades there has been global progress in reducing maternal and infant mortality: since 1990, maternal and infant deaths have decreased notably. Nevertheless, the reduction in maternal mortality has been comparatively slow and, despite the significant decline, falls short of the Millennium Development Goals (MDGs) target of a 75% reduction from 1990 levels . The World Health Organization (WHO) reports a current global maternal mortality ratio of 216 deaths per 100,000 live births . The main factors associated with high MNM were: a history of chronic medical disorders [ – ], rural residency [ – , ], no antenatal care attendance [ – ], age of the respondent , having no formal education or a low educational level , source of referral [ – ], a history of chronic hypertension and anemia , and a previous caesarean section and/or abortion .
Furthermore, exploring the underlying factors behind maternal near misses offers valuable insights for healthcare practitioners in delivering high-quality maternity care by improving facility preparedness. Investigating MNM incidents, as opposed to maternal deaths, also presents several advantages. MNM cases are more prevalent than maternal deaths, providing a larger pool of tangible evidence for understanding the pathways leading to severe maternal morbidity. Since the women involved have survived, examining the care they received is less emotionally distressing for healthcare providers. The women themselves can provide valuable firsthand accounts, serving as witnesses to help us learn from their experiences. Lastly, every near-miss case offers a valuable opportunity to learn and enhance maternity care at no cost, providing a platform for continuous improvement . Several primary studies on the prevalence of near-miss cases and their determining factors had been conducted in Ethiopia [ , , – ] at the time this study was initiated, but the inconsistency of their reports presents challenges for health programs and clinical decision-making. It was therefore deemed vital to conduct this research to verify these findings and provide strong evidence for clinical decision-making and health programs. Given the diversity in the demographic, obstetric, and medical attributes of women, variables such as educational background and history of antenatal care (ANC) follow-up play a significant role in the provision of care during pregnancy and childbirth. Analyzing cases of maternal near miss enables the evaluation of effective interventions, identifies shortcomings in the healthcare system, and offers alternative approaches to decrease maternal mortality . While ample evidence is available in this field, the outcomes of individual studies vary, making it prudent to consolidate the evidence through synthesis. Consequently, the objective of this systematic review and meta-analysis was to assess the overall prevalence of MNM and identify the factors that contribute to it among women in Ethiopia. The results will provide valuable guidance for policymakers and other stakeholders in developing and executing strategies to reduce the occurrence of maternal near misses.
Study design and protocol

A systematic review and meta-analysis was carried out to examine the prevalence of maternal near-miss cases in Ethiopia. The study strictly followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, whose checklists offer guidance on conducting and reporting systematic reviews and meta-analyses in a standardized manner (S1 File). The study was registered with the International Prospective Register of Systematic Reviews (PROSPERO, registration number CRD42023485844). Ethiopia, a low-income nation situated in the Horn of Africa, is anticipated to have a population of 123.4 million in 2022, 133.5 million in 2032, and 171.8 million in 2050. Administratively, Ethiopia is divided into 11 regions and two city administrations; the regions are further segmented into zones, and zones are then subdivided into districts.

Study selection

The identified studies were imported into reference management software (EndNote X8) to eliminate duplicate studies. Two researchers independently assessed the selected studies based on their titles and abstracts to determine their relevance. Full-text papers were retrieved for further evaluation, following pre-defined inclusion criteria. Any disagreements that arose during the screening process were resolved through a consensus meeting involving the other reviewers, MW and MB.

Eligibility criteria: inclusion and exclusion criteria

This review included observational studies: cross-sectional, case–control, and cohort designs. The inclusion criteria encompassed studies conducted in Ethiopia and published in English that reported the prevalence of MNM and/or identified at least one determinant. Unpublished works on MNM were also considered. Citations lacking an abstract and/or full text, anonymous reports, editorials, and qualitative studies were excluded, as were studies that did not report the outcomes relevant to our research objectives. Our focus was specifically on observational studies, including case–control and cross-sectional designs, that examined the prevalence or proportion of MNM and its related factors. The study period considered for inclusion ranged from January 1, 2016, to September 23, 2023 (Fig. ).

Searching strategy and data sources

The databases of PubMed, Scopus, the Cochrane Library, and Google Scholar were searched for relevant studies. We utilized MeSH terms, keywords, and combinations thereof to refine the search, and employed snowball searching by examining the reference lists of retrieved papers to identify additional relevant articles. To ensure a comprehensive search, unpublished studies were also sourced from the official websites of international and local organizations, as well as university repositories. The search strategy involved the use of keywords and medical subject heading (MeSH) terms, combined with the Boolean operators "OR" and "AND". Key search terms included "maternal," "near miss," "obstetric complications," "pregnancy," "maternal death," "causes," "risk factors," "determinants," "associated factors," "predictors," and "Ethiopia." Notably, for the PubMed database, the following search strategy was utilized: (prevalence OR magnitude OR epidemiology) AND (causes OR determinants OR associated factors OR predictors OR risk factors) AND (maternal near miss[MeSH Terms] OR childbirth OR child OR childhood) AND Ethiopia. Additionally, we screened the reference lists of selected papers to identify any further relevant studies for inclusion in this review.

Identification and study selection

All identified studies were imported into the EndNote X8 reference manager software, and duplicate articles were removed. The screening process involved evaluating the titles and abstracts of the studies; three authors screened and assessed the articles together. The full text of the selected studies was then evaluated based on their objectives, methodology, participants/population, and key findings related to maternal near miss. In case of any disagreements during the screening process, a consensus meeting involving other senior reviewers was held to resolve them.

Data extraction

An Excel sheet was developed by the authors as a data extraction form, with fields for author name, year of publication, region, study design, sample size, prevalence of MNM, and reported determinant factors. To ensure the effectiveness of the form, a pilot test was conducted using four randomly selected papers, after which the template was adjusted. Two authors then collaborated to extract the data using the revised form, and the third and fourth authors independently verified the accuracy of the extracted data. Where there were discrepancies between the reviewers, discussions involving a third and fourth reviewer took place to reach a consensus. To minimize data-entry errors, the extracted data were cross-checked against the included papers to rectify any mistyping or inaccuracies.

Quality assessment

The evaluation of article quality was carried out using the Joanna Briggs Institute (JBI) quality appraisal checklist, which scores each item 1 for "yes," 0 for "no," and U for "unclear." The item scores for each study were summed and transformed into a percentage, and the ranking was given as follows: ≤ 49% = high risk of bias, 50–69% = moderate risk of bias, and ≥ 70% = low risk of bias. Only studies that scored ≥ 50% were considered in this systematic review and meta-analysis. In the case of ongoing disputes between reviewers, the average of the reviewers' ratings was computed. The quality rating of each primary study was recorded in a separate column of the data extraction form. Four independent authors were assigned to assess the quality of the studies, each evaluating them individually. The assessment encompassed aspects such as methodological quality, sample selection, sample size, comparability, outcome assessment, and statistical analysis. To ensure thoroughness, the appraisal involved multiple rounds in which authors exchanged assessments, so that each paper was appraised by two authors. In the event of disagreements, discussions took place and a senior author was consulted for resolution. This meticulous process ensured that the quality assessment was conducted rigorously and comprehensively, incorporating diverse perspectives and the expertise of the author team (Supplementary Table 1).
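As a concrete illustration of the scoring rule just described, the short sketch below (our own illustrative code, not the authors') converts a set of JBI checklist answers into the percentage score and risk-of-bias ranking used for inclusion.

```python
def jbi_rank(answers):
    """Score JBI checklist answers ('yes' = 1; 'no'/'unclear' = 0),
    convert to a percentage, and apply the ranking used in this review."""
    pct = 100 * sum(a == "yes" for a in answers) / len(answers)
    if pct <= 49:
        return pct, "high risk of bias (excluded)"
    if pct <= 69:
        return pct, "moderate risk of bias"
    return pct, "low risk of bias"

# Example: 7 'yes' answers on a 9-item checklist -> ~77.8%, low risk,
# so the study would be retained (score >= 50%).
print(jbi_rank(["yes"] * 7 + ["no", "unclear"]))
```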
Outcome of measurement

The primary outcome of this systematic review and meta-analysis was maternal near miss: the condition of a critically ill pregnant or recently delivered woman who experienced a severe complication during pregnancy, childbirth, or within 42 days after the termination of pregnancy but survived . The second outcome was to identify the determinants of MNM, i.e., to examine the factors that may contribute to its occurrence. The review sought to analyze and summarize the available evidence on these determinant factors to provide a comprehensive understanding of their influence on MNM.

Statistical analysis

Once extracted into Microsoft Excel format, the data were imported into STATA version 14.0 statistical software for further analysis. The standard error for each study was calculated using the binomial distribution formula. To determine the overall estimate of the magnitude of MNM, a random-effects meta-analysis was conducted by pooling the data. The pooled prevalence of MNM, with a 95% confidence interval (CI), was presented using forest plots; likewise, forest plots were used to present the odds ratios (OR) with 95% CIs for the determinants of MNM. Heterogeneity among the studies was assessed using Cochran's Q statistic (chi-square), the I² statistic, and p-values. An I² value of zero indicated true homogeneity, while values of 25, 50, and 75% denoted low, moderate, and high heterogeneity, respectively. For data identified as heterogeneous, a random-effects model was used. Subgroup analysis was performed based on study region and design, and sensitivity analysis was conducted to evaluate the impact of individual studies on the overall estimate. Publication bias was assessed through the funnel plot and, more objectively, using Egger's regression test.

Subgroup analyses

To investigate potential variation in the prevalence of MNM within Ethiopia, subgroup analyses were conducted based on study region and study design. The purpose of these analyses was to assess whether the prevalence estimates differed significantly across geographical areas and the study designs employed.

Publication bias and heterogeneity

Comprehensive searches, including electronic/database searches and manual searches, were conducted to minimize the risk of bias. The authors' collaborative efforts played a crucial role in reducing bias through adherence to clear objectives and eligibility criteria, evaluation of study quality, and careful extraction and compilation of the data. Publication bias was assessed qualitatively through visual inspection of the funnel plot; additionally, Egger's regression tests were conducted at a significance level of 5%. Sensitivity analysis was used to evaluate the stability and robustness of the pooled estimates in the presence of outliers and the potential influence of individual studies on the overall results. This analysis involved systematically excluding one study at a time and re-analyzing the data to understand the impact of specific studies on the pooled estimates and the overall conclusions of the systematic review and meta-analysis.
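Although the analyses above were run in STATA 14, the core computations are compact enough to sketch. The following Python example is illustrative only: the study counts are made up and the helper names are ours. It shows the binomial standard error, Cochran's Q with the I² statistic, a DerSimonian–Laird random-effects pool, a leave-one-out sensitivity loop, and an Egger-style intercept test.

```python
import numpy as np
from scipy import stats

# Made-up study-level data: MNM events and sample sizes per study.
events = np.array([90, 120, 75, 200, 60])
n = np.array([1200, 1500, 900, 2600, 800])

p = events / n
se = np.sqrt(p * (1 - p) / n)            # binomial standard error
w = 1 / se**2                            # inverse-variance weights

# Cochran's Q and the I^2 heterogeneity statistic.
p_fixed = np.sum(w * p) / np.sum(w)
Q = np.sum(w * (p - p_fixed) ** 2)
df = len(p) - 1
I2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0

# DerSimonian-Laird between-study variance, then the random-effects pool.
tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
w_re = 1 / (se**2 + tau2)
p_re = np.sum(w_re * p) / np.sum(w_re)
se_re = np.sqrt(1 / np.sum(w_re))
lo, hi = p_re - 1.96 * se_re, p_re + 1.96 * se_re
print(f"pooled prevalence {100*p_re:.2f}% "
      f"(95% CI {100*lo:.2f}-{100*hi:.2f}), I2 = {I2:.0f}%")

# Leave-one-out sensitivity analysis: drop each study and re-pool
# (between-study variance is not re-estimated here, for brevity).
for i in range(len(p)):
    keep = np.arange(len(p)) != i
    wk = 1 / (se[keep] ** 2 + tau2)
    print(f"without study {i + 1}: "
          f"{100 * np.sum(wk * p[keep]) / np.sum(wk):.2f}%")

# Egger's test: regress the standardized effect on precision; an
# intercept far from zero suggests funnel-plot asymmetry.
reg = stats.linregress(1 / se, p / se)
t_int = reg.intercept / reg.intercept_stderr
p_egger = 2 * stats.t.sf(abs(t_int), df=len(p) - 2)
print(f"Egger intercept {reg.intercept:.2f}, p = {p_egger:.3f}")
```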
To examine the prevalence of maternal near-miss cases in Ethiopia, a systematic review and meta-analysis were carried out. The study strictly followed the Preferred Reporting Items for Systematic Review and Meta-Analysis (PRISMA) guidelines. These guidelines include checklists that offer guidance on conducting and reporting systematic reviews and meta-analyses in a standardized manner (S1 File). This study was registered with the Prospective International Register of Systematic Reviews (PROSPERO, number CRD42023485844. Ethiopia, a low-income nation situated in the Horn of Africa, is anticipated to have a population of 123.4 million in 2022, 133.5 million in 2032, and 171.8 million in 2050. Administratively, Ethiopia is divided into 11 regions and two city administrations. The regions are further segmented into zones, and zones are then subdivided into districts.
The identified studies were imported into reference management software, specifically Endnote version 8, to eliminate duplicate studies. Two researchers independently assessed the selected studies based on their titles and abstracts to determine their relevance. Full-text papers were retrieved for further evaluation, following pre-defined inclusion criteria. Any disagreements that arose during the screening process were resolved through a consensus meeting involving other reviewers, MW and MB.
Inclusion and exclusion criteria This review included observational studies, including cross-sectional, case–control, and cohort studies. The inclusion criteria encompassed studies conducted in Ethiopia and published in English that reported the prevalence of MNM and/or identified at least one determinant. Unpublished works on MNM were also taken into consideration. Citations lacking an abstract and/or full text, anonymous reports, editorials, and qualitative studies were excluded from the analysis. Additionally, studies that did not report the outcomes relevant to our research objectives were also excluded. Our focus was specifically on identifying observational studies, including case–control and cross-sectional designs, that examined the prevalence or proportion of failed induction and its related factors. The study period considered for inclusion ranged from January 1, 2016, to September 23, 2023 (Fig. ).
This review included observational studies, including cross-sectional, case–control, and cohort studies. The inclusion criteria encompassed studies conducted in Ethiopia and published in English that reported the prevalence of MNM and/or identified at least one determinant. Unpublished works on MNM were also taken into consideration. Citations lacking an abstract and/or full text, anonymous reports, editorials, and qualitative studies were excluded from the analysis. Additionally, studies that did not report the outcomes relevant to our research objectives were also excluded. Our focus was specifically on identifying observational studies, including case–control and cross-sectional designs, that examined the prevalence or proportion of failed induction and its related factors. The study period considered for inclusion ranged from January 1, 2016, to September 23, 2023 (Fig. ).
The databases of PubMed, Scopus, the Cochrane Library, and Google Scholar were searched for relevant studies. We utilized MeSH terms, keywords, and combinations thereof to refine the search. Additionally, we employed snowball searching techniques by examining the reference lists of retrieved papers to identify any additional relevant articles. To ensure a comprehensive search, unpublished studies were also sourced from official websites of international and local organizations, as well as university repositories. The search strategy involved the use of keywords and medical subject heading (MeSH) terms, with combinations of “OR” and “AND” operators. Key search terms included “maternal,” “near miss,” “obstetric complications,” “pregnancy,” “maternal death,” “respiratory infection,” “causes,” “risk factors,” “determinants,” “associated factors,” “predictors,” and “Ethiopia.” Various Boolean operators were employed to develop the search strategies. Notably, for the PubMed database, the following search strategy was utilized: prevalence OR magnitude OR epidemiology; AND (causes OR determinants OR associated factors OR predictors OR risk factors; AND maternal near miss [MeSH Terms] OR childbirth OR child OR childhood) AND Ethiopia. Additionally, we screened the reference lists of selected papers to identify any further relevant studies for inclusion in this review.
All the identified studies were imported into the Endnote X8 reference manager software, and any duplicate articles were removed. The screening process involved evaluating the titles and abstracts of the studies. Three authors together screened and assessed the articles. The full text of the selected studies was then evaluated based on their objectives, methodology, participants/population, and key findings related to maternal near miss. In case of any disagreements during the screening process, a consensus meeting was held involving other senior reviewers to resolve them.
An Excel sheet was developed by the authors to create a data extraction form, which consisted of fields such as author name, year of publication, region, study design, sample size, prevalence of MNM, and reported determinant factors. To ensure the effectiveness of the data extraction form, a pilot test was conducted using four randomly selected papers. Following the pilot phase, adjustments were made to the extraction form template. Subsequently, two authors collaborated to extract the data using the revised extraction form. The third and fourth authors independently verified the accuracy of the extracted data. In cases where there were discrepancies between the reviewers, discussions took place involving a third and fourth reviewer to reach a consensus. To minimize errors in data entry, cross-checking with the included papers was performed to rectify any mistyping or inaccuracies.
The evaluation of article quality was carried out using the Joanna Briggs Institute’s (JBI) quality appraisal checklist. The Joanna Briggs Institute’s (JBI) quality appraisal checklist score is 1 for “yes,” 0 for “no,” and U for “unclear.” The final Scores for each study were summed and transformed into a percentage. Finally, the ranking was given as follows: ≤ 49% = high risk of bias, 50–69% = moderate risk of bias, and above 70% = low risk of bias. Only studies that scored ≥ 50% were considered in this systemic review and meta-analysis. In the case of ongoing disputes between reviewers, the average ratings of the reviewers were computed. The quality of the primary study results was recorded in a separate column in the data extraction form. This meticulous process ensured that the quality assessment was conducted rigorously and comprehensively, incorporating diverse perspectives and the expertise of the author team. Four independent authors were assigned to assess the quality of the studies, each responsible for evaluating them individually. The assessment encompassed various aspects such as methodological quality, sample selection, sample size, comparability, outcome assessment, and statistical analysis of the study. To ensure thoroughness and comprehensiveness, the appraisal process involved multiple rounds where authors exchanged assessments with each other. Consequently, each paper was appraised by two authors. In the event of disagreements, discussions took place, and a senior author was consulted for resolution. This meticulous process guaranteed that the quality assessment was conducted with rigor and a comprehensive approach, taking into account diverse perspectives and the expertise of the author team (Supplementary Table 1).
The primary outcome of this systematic review and meta-analysis was maternal near miss. MNM refers to the condition of a critically ill pregnant or recently delivered woman who experienced a severe complication during pregnancy, childbirth, or within 42 days after the termination of pregnancy but survived . The secondary outcome was to identify the determinants of MNM, that is, the factors that may contribute to its occurrence. The review sought to analyze and summarize the available evidence on these determinant factors to provide a comprehensive understanding of their influence on MNM.
Once extracted into Microsoft Excel format, the data were imported into STATA version 14.0 statistical software for further analysis. The standard error for each study was calculated using the binomial distribution formula. To determine the overall estimate of the magnitude of MNM, a random-effects meta-analysis was conducted by pooling the data. The pooled prevalence of MNM, with its 95% confidence interval (CI), was presented in forest plots, as were the odds ratios (OR) with 95% CIs illustrating the determinants of MNM. Heterogeneity among the studies was assessed using Cochrane's Q statistic (chi-square), the I² statistic, and p-values. An I² value of zero indicated true homogeneity, while values of 25%, 50%, and 75% denoted low, moderate, and high heterogeneity, respectively. For data identified as heterogeneous, a random-effects model was used. Additionally, subgroup analysis was performed based on study region and design, sensitivity analysis was conducted to evaluate the impact of individual studies on the overall estimate, and publication bias was assessed through the funnel plot and, more objectively, Egger's regression test.
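As an illustration of the pooling steps described above, the sketch below (Python with numpy) computes the binomial standard error of each prevalence, Cochran's Q, the I² statistic, and a random-effects pooled estimate. The study-level numbers are invented for demonstration and are not taken from the included studies, and the DerSimonian–Laird estimator is an assumption on our part — the review reports only that a random-effects model was fitted in STATA.

```python
import numpy as np

# Hypothetical study-level data: prevalence p_i (proportion) and sample size n_i.
p = np.array([0.08, 0.15, 0.22, 0.10])
n = np.array([350, 500, 280, 420])

# Binomial standard error of each prevalence: SE_i = sqrt(p_i * (1 - p_i) / n_i).
se = np.sqrt(p * (1 - p) / n)

# Fixed-effect (inverse-variance) weights and estimate, needed to compute Q.
w = 1 / se**2
theta_fe = np.sum(w * p) / np.sum(w)

# Cochran's Q and the I^2 heterogeneity statistic: I^2 = (Q - df) / Q.
k = len(p)
Q = np.sum(w * (p - theta_fe) ** 2)
I2 = max(0.0, (Q - (k - 1)) / Q) * 100  # percent of variation due to heterogeneity

# DerSimonian-Laird between-study variance tau^2, then random-effects pooling.
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
w_re = 1 / (se**2 + tau2)
theta_re = np.sum(w_re * p) / np.sum(w_re)
se_re = np.sqrt(1 / np.sum(w_re))
ci = (theta_re - 1.96 * se_re, theta_re + 1.96 * se_re)  # 95% CI

print(f"I^2 = {I2:.1f}%, pooled prevalence = {theta_re:.3f}, "
      f"95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```

The same pooling logic applies within subgroups: restricting the arrays to the studies from one region or one study design and re-running the calculation yields the subgroup estimates reported later.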
To investigate potential variations in the prevalence of MNM within Ethiopia, subgroup analyses were conducted based on study region and study design. The purpose of these analyses was to assess whether the prevalence estimates differed significantly across geographical areas and across the study designs employed.
Comprehensive and thorough searches, including electronic/database searches and manual searches, were conducted to minimize the risk of bias. The authors' collaborative efforts further reduced bias through adherence to clear objectives and eligibility criteria, evaluation of study quality, and careful extraction and compilation of the data. Publication bias was assessed qualitatively by visual inspection of the funnel plot and, more formally, by Egger's regression test at a 5% significance level. Sensitivity analysis was performed to evaluate the stability and robustness of the pooled estimates in the presence of outliers: one study at a time was systematically excluded and the data re-analyzed, revealing the influence of individual studies on the pooled estimates and the overall conclusions of the systematic review and meta-analysis.
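The sensitivity and publication-bias checks described above can likewise be sketched in a few lines. The snippet below (Python; it reuses the invented study data and pooling logic from the previous sketch and is not the STATA routine actually used in this review) re-pools the data leaving one study out at a time, and runs the classic Egger regression of the standardized effect on precision, where an intercept significantly different from zero at the 5% level suggests small-study effects.

```python
import numpy as np
import statsmodels.api as sm

def pool_random_effects(p, se):
    """DerSimonian-Laird random-effects pooled estimate (same logic as the previous sketch)."""
    w = 1 / se**2
    theta_fe = np.sum(w * p) / np.sum(w)
    k = len(p)
    Q = np.sum(w * (p - theta_fe) ** 2)
    tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1 / (se**2 + tau2)
    return np.sum(w_re * p) / np.sum(w_re)

# Invented study-level data (same as the previous sketch).
p = np.array([0.08, 0.15, 0.22, 0.10])
n = np.array([350, 500, 280, 420])
se = np.sqrt(p * (1 - p) / n)

# Leave-one-out sensitivity analysis: omit each study in turn and re-pool.
for i in range(len(p)):
    mask = np.arange(len(p)) != i
    print(f"omitting study {i + 1}: pooled = {pool_random_effects(p[mask], se[mask]):.3f}")

# Egger's regression test: regress the standardized effect (p/se) on precision (1/se);
# a non-zero intercept suggests funnel-plot asymmetry / publication bias.
X = sm.add_constant(1 / se)
fit = sm.OLS(p / se, X).fit()
print(f"Egger intercept = {fit.params[0]:.3f}, p-value = {fit.pvalues[0]:.3f}")
```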
Literature search findings
The initial database search identified 540 articles, of which 388 distinct articles remained after removal of duplicates. Following screening of titles and abstracts, 321 articles were excluded on the basis of titles and 23 on the basis of abstracts. The remaining articles underwent detailed full-text evaluation to determine their eligibility: six studies were excluded due to differing outcome estimates, 18 because the outcome of interest was not reported, and a further two because the full text was inaccessible. As a result, a total of 13 studies were included in the final analysis (Fig. ).

Characteristics of included studies
Supplementary Table 2 summarizes the characteristics of the 13 studies included in the systematic review and meta-analysis. Studies were conducted in the Amhara region [ , , ], the Oromia region [ , , ], the SNNP region , the Harar region , the Tigray region , and Addis Ababa city . Four studies were cross-sectional, eight were case–control, and one was a cohort study. Sample sizes ranged from 183 to 29,697 participants (Supplementary Table 2).

Prevalence of MNM in Ethiopia
Six of the studies reported the prevalence of MNM [ , , , , , ], which ranged from 0.8% to 28.7% . The random-effects model revealed a pooled prevalence of MNM in Ethiopia of 12.9% (95% CI: 6.30–19.49; I² = 98.3%; p < 0.001) (Fig. ).

Publication bias
The funnel plot showed an asymmetrical distribution, and the p-value of Egger's regression test was 0.036, indicating the presence of publication bias (Fig. ).

Subgroup analysis of MNM in Ethiopia
The prevalence of MNM was examined through subgroup analysis stratified by study region and study design. The prevalence was 24.85% in SNNPR, 16.19% in Oromia, 15.8% in Amhara, 9.21% in Harar, and 0.8% in Addis Ababa (Fig. ). By study design, the prevalence of MNM was 17.17% in cross-sectional studies, 9.21% in cohort studies, and 4.49% in case–control studies (Fig. ). To identify potential sources of heterogeneity in the analysis of MNM prevalence, a leave-one-out sensitivity analysis was conducted; the results indicated that the findings did not rely on any single study, supporting the robustness of the overall conclusions. For this analysis the funnel plot demonstrated a symmetrical distribution and the p-value of Egger's regression test was 0.63, indicating the absence of publication bias (Supplementary Fig. 1).

Factors associated with MNM
History of cesarean section
Six studies showed a notable association between a history of cesarean section and maternal near miss (MNM). Among these, the highest adjusted odds ratio (AOR) was 7.68 (95% CI: 5.69, 9.67) and the lowest was 3.53 (95% CI: 2.22, 4.48), reported by Dessalegn FN et al. , compared with women who had no history of cesarean section. The Galbraith plot indicated homogeneity. Combining the results from the six studies, the forest plot showed an overall estimated AOR of 4.40 (95% CI: 3.51, 5.28; I² = 64.4%; P = 0.0013), with the I² value and p-value indicating moderate heterogeneity (Fig. ). The funnel plot exhibited a symmetrical distribution; however, the p-value of Egger's regression test was 0.049, suggesting the presence of publication bias (Supplementary Fig. 2). A leave-one-out sensitivity analysis, conducted to identify potential sources of heterogeneity in this pooled estimate, demonstrated that the findings did not rely on any single study; the pooled estimate for a history of cesarean section ranged from 3.99 (95% CI: 3.47–4.52) to 4.51 (95% CI: 3.42–5.60).

Lack of antenatal visit
Nine studies identified a significant association between a lack of antenatal care (ANC) visits and maternal near misses. The highest AOR, 6.02 (95% CI: 3.69, 8.35), was reported by Lemi K et al. , and the lowest, 0.76 (95% CI: 0.16, 1.36), by Kasahun AW and Wako WG . The forest plot showed an overall estimated AOR of 3.09 (95% CI: 2.12–4.05; I² = 86.6%; P = 0.00), with the I² value and p-value indicating substantial heterogeneity among the studies (Fig. ). The funnel plot exhibited an asymmetrical distribution, and the p-value of Egger's regression test was 0.002, suggesting the presence of publication bias. A leave-one-out sensitivity analysis was conducted to identify potential sources of heterogeneity in this pooled estimate (Supplementary Fig. 3).

Having chronic comorbidity
Seven studies observed a significant association between chronic comorbidity and MNMs. Among these, the highest AOR was 12.10 (95% CI: 8.61, 15.59), reported by Teshome HN et al. , and the lowest was 2.04 (95% CI: 1.22, 2.86), reported by Dessalegn FN et al. , in comparison with individuals without chronic comorbidity. The Galbraith plot indicated homogeneity. Combining the results of the seven studies, the forest plot showed an overall estimated AOR of 4.70 (95% CI: 2.97–6.42; I² = 93.2%; P = 0.00), with the I² value and p-value indicating substantial heterogeneity (Fig. ). The funnel plot demonstrated a symmetrical distribution; however, the p-value of Egger's regression test was 0.001, suggesting the presence of publication bias (Supplementary Fig. 4). A leave-one-out sensitivity analysis demonstrated that the findings did not rely on any single study; the pooled estimate for chronic comorbidity ranged from 3.42 (95% CI: 2.22–4.62) to 5.28 (95% CI: 3.18–7.39) when no studies were excluded (Supplementary Fig. 5).

Rural residence
Eleven studies found a significant association between rural residence and MNMs. Among these, the highest AOR was 13.00 (95% CI: 10.96, 15.04), in Liyew et al. , and the lowest was 0.01 (95% CI: −0.86, 0.88), in Asaye MM . The Galbraith plot indicated homogeneity. Combining the results from seven studies, the forest plot showed an overall estimated AOR of 1.71 (95% CI: 0.93–2.49; I² = 94.2%; P = 0.00) for the association between rural residence and MNMs, with the I² value confirming substantial heterogeneity (Fig. ). The funnel plot exhibited a symmetrical distribution; however, the p-value of Egger's regression test was 0.008, suggesting the presence of publication bias (Supplementary Fig. 6). A leave-one-out sensitivity analysis indicated that the findings did not rely on any single study (Supplementary Fig. 7).

Mode of admission
Six studies found a significant association between referral from other health facilities and MNMs. Among these, the highest AOR was 7.47 (95% CI: 5.11, 9.83), reported by Kasahun AW and Wako WG , and the lowest was 0.41 (95% CI: 0.07, 0.75), reported by Mekonnen et al. , compared with women who were not referred from other health facilities. The Galbraith plot indicated homogeneity. Combining the results from seven studies, the forest plot showed an overall estimated AOR of 2.67 (95% CI: 1.36–3.98; I² = 93.1%; P = 0.00) for the association between referral from other health facilities and MNMs, with the I² value confirming substantial heterogeneity (Fig. ). The funnel plot exhibited a symmetrical distribution; however, the p-value of Egger's regression test was 0.00, suggesting the presence of publication bias (Supplementary Fig. 8). A leave-one-out sensitivity analysis demonstrated that the findings did not rely on any single study; the pooled estimate for referral from other health facilities ranged from 1.96 (95% CI: 0.84–3.08) to 3.17 (95% CI: 1.79–4.45) when no studies were excluded (Supplementary Fig. 9).

Educational status
Four studies found a significant association between being unable to read and write and MNMs. Among these, the highest AOR was 3.28 (95% CI: 2.28, 4.28), in Liyew et al. , and the lowest was 1.14 (95% CI: 0.51, 2.31), in Mekango DE et al. , compared with women with a college education or above. The Galbraith plot revealed homogeneity. Combining the results from the four studies, the forest plot showed an overall estimated AOR of 2.48 (95% CI: 1.59–3.36; I² = 67.5%; P = 0.026), with the I² value and p-value indicating moderate heterogeneity (Fig. ). Egger's regression test yielded a p-value of 0.348, indicating the absence of publication bias (Supplementary Fig. 10). The pooled estimate for being unable to read and write ranged from 2.18 (95% CI: 1.25–3.12) to 2.83 (95% CI: 2.12–3.53) without the exclusion of any individual study (Supplementary Fig. 11).
This review aimed to investigate the prevalence and determinants of MNMs in Ethiopia. The results revealed that MNMs pose a substantial public health concern in the country, highlighting the need for policymakers to develop strategies to enhance obstetric and maternal care in order to reduce these occurrences. The pooled prevalence of MNMs in this study was 12.9%, higher than the prevalence reported in Malaysia. By comparison, the MNM incidence ratio in Ethiopia was found to be 2.2 , ratios of 7.6 to 15.6 were reported in India , 4.2% in Indonesia, and 9.6 in Brazil , with an MNM incidence of 4.0% in low-income countries; overall, the weighted pooled prevalence of MNMs was higher than in developed countries. This disparity could be attributed to the higher socioeconomic status of developed countries, which gives them access to better facilities and resources for providing high-quality obstetric and maternal care, an approach directly associated with how a country addresses MNMs. Variations in study settings, study designs, and sample sizes could also contribute to the observed differences. The pooled prevalence was, however, lower than in Tanzania, where the MNM incidence ratio was 23.6 , and in Uganda (22.7%) . This could be attributed to the difficulty of diagnosing MNMs in many sub-Saharan African settings using the laboratory-based criteria recommended by the World Health Organization (WHO), especially in low-income settings; this challenge arises from factors such as inadequate infrastructure and limitations in making accurate diagnoses.

In this review, mothers with a history of referral from other health facilities had 2.67-fold higher odds (95% CI: 1.36–3.98) of developing an MNM, a finding supported by other studies [ , , , ]. One possible explanation is that the larger number of complex cases referred contributes to MNM events; factors such as lack of transportation, long distances that hinder timely access to referral facilities (the second delay), delayed referrals, and failure to identify potentially life-threatening complications early can all play a role. Delays in receiving appropriate services have likewise been linked to an elevated risk of experiencing an MNM event. This association suggests that the Regional Health Bureaus and the Ministry of Health should consider integrating maternal intensive care units into each hospital to reduce the need for referrals.

This review also identified a strong association between a previous cesarean section and MNM. The overall adjusted odds ratio (AOR) for a history of cesarean section was 4.40 (95% CI: 3.51, 5.28), indicating a significantly higher risk of MNM compared with women without such a history. It is worth noting that the global incidence of cesarean sections has been steadily rising and has reached a rate of 29.55% in Ethiopia. Cesarean section during the current pregnancy was prevalent among pregnant women with MNM and was significantly associated with a fourfold increase in the risk of MNMs, aligning with the results of previous studies and providing further support for the relationship between cesarean section and the occurrence of MNMs [ – ].
This may be because cesarean section (CS) increases the risk of thromboembolism, puerperal infection, hemorrhage, and anesthetic complications, alongside the inherent surgical risks of blood loss and postoperative complications. Cesarean sections are recognized to carry potential health risks for women and, compared with vaginal delivery, may represent a modifiable risk factor for maternal mortality. While a previous cesarean delivery can be life-saving for both mother and baby, it also raises the likelihood of complications such as hemorrhage, recurrence, placenta accreta in scar tissue, thrombosis, and uterine rupture during subsequent vaginal birth attempts, all of which can increase the risk of MNMs. Cesarean section therefore appears to act as an additional risk factor that heightens the likelihood of maternal near misses.

Furthermore, among women with pre-existing chronic conditions, the overall adjusted odds ratio (AOR) for MNM was estimated to be 4.70 (95% CI: 2.97–6.42). This finding aligns with previous studies conducted in the USA, Brazil, the Netherlands, and other locations, indicating a consistent association between pre-existing chronic conditions and the occurrence of maternal near misses [ – ]. This could be attributed to comorbidities significantly elevating the risk of complications such as superimposed pre-eclampsia, placental abruption, intrauterine growth retardation, and preterm delivery. Chronic hypertension, diabetes mellitus, and cardiovascular disease are indications for referral to higher-level healthcare facilities, and promoting screening programs for non-communicable diseases would be a beneficial approach to reducing maternal near misses.

This review also identified a notable association between the absence of formal education and MNMs: the overall AOR among those without formal education was estimated to be 2.48 (95% CI: 1.59–3.36), meaning they were 2.48 times more likely to experience MNMs than women with a college education or above, in line with findings from other studies [ , , ]. This could be because women with lower levels of education have less access to information, resulting in limited awareness of their health and of the signs of potential pregnancy complications, and are consequently less likely to utilize maternity healthcare services. Conversely, individuals with higher levels of education tend to have easier access to information, which is associated with a better understanding of their health and obstetric complications and with enhanced decision-making abilities.

The review also revealed a significant association between antenatal care follow-up and MNMs: women who did not receive antenatal care follow-up during pregnancy had 3.09-fold higher odds of experiencing near-miss events than those who received regular antenatal care. This finding aligns with similar studies conducted in Ethiopia, Nigeria, Pakistan, Bangladesh, and Bolivia [ , , , ].
This could be attributed to missed antenatal checkups, which can leave women without knowledge of the timing and signs of labor, optimal birthing locations, and when to seek professional assistance rather than managing on their own.
Maternal near misses remain prevalent in Ethiopia, and several factors were identified as predictors of these events, including pre-existing chronic conditions, lack of formal education, history of referral, lack of antenatal care, and history of cesarean section. To address this issue, it is crucial to implement strategies that increase access to education for women, especially in rural areas, to empower them with knowledge about maternal health. Strengthening antenatal care services, with a focus on early detection and management of complications, is essential, as are proactive management of pre-existing chronic conditions during pregnancy, promotion of safe delivery practices to reduce unnecessary cesarean sections, and improvement of the quality of referral systems. Collaboration between healthcare providers, policymakers, the Ethiopian Ministry of Health, and hospitals is vital to implement these recommendations effectively and reduce the prevalence of maternal near misses in Ethiopia.
To ensure the rigor of this review, we adhered to a predetermined search strategy and employed established methods for evaluating the quality of individual studies. Sensitivity and subgroup analyses were conducted based on study region and design, and trim-and-fill analysis was employed to mitigate the potential influence of publication bias. Nevertheless, it is important to recognize that the selective incorporation of literature could bias this study. Because the included studies were observational, the observed associations cannot be interpreted as causal. Publication bias remains possible owing to the exclusion of certain grey literature sources, alongside language bias arising from the restriction to English-language publications. Furthermore, because the included studies were conducted exclusively in Ethiopia, the findings may not generalize to countries with distinct socio-economic and cultural landscapes.
Supplementary Material 1: S1 File. PRISMA checklist.
Supplementary Material 2: Fig. 1 Publication bias for the subgroup analysis of the prevalence of maternal near miss based on region and study design in Ethiopia. Fig. 2 Publication bias for the pooled estimate of the AOR of having a history of cesarean section and MNM in Ethiopia. Fig. 3 Sensitivity analysis for the pooled estimate of the AOR of lack of ANC visits and MNM in Ethiopia. Fig. 4 Publication bias for the pooled estimate of the AOR of having chronic comorbidity on MNM in Ethiopia. Fig. 5 Sensitivity analysis for the pooled estimate of the AOR of having chronic comorbidity on MNM in Ethiopia. Fig. 6 Publication bias for the pooled estimate of the AOR of rural residence on MNM in Ethiopia. Fig. 7 Sensitivity analysis for the pooled estimate of the AOR of rural residence on MNM in Ethiopia. Fig. 8 Trim-and-fill analysis of the pooled estimate of the AOR of mode of admission on MNM in Ethiopia. Fig. 9 Sensitivity analysis of the pooled estimate of the AOR of mode of admission on MNM in Ethiopia. Fig. 10 Publication bias for the pooled estimate of the AOR of educational status on MNM in Ethiopia. Fig. 11 Sensitivity analysis of the pooled estimate of the AOR of educational status on MNM in Ethiopia.
Supplementary Material 3: Supplementary Table 1: JBI quality assessment scale for cross-sectional studies to assess determinants of maternal near miss, 2023. Supplementary Table 2: Characteristics of research articles included in this systematic review and meta-analysis, 2023.
Continuation of telemedicine in otolaryngology post-COVID-19: Applications by subspecialty | 0a2f8dc3-da71-4894-9c7f-06022533476c | 7816955 | Otolaryngology[mh] | Introduction On March 11, 2020 the World Health Organization (WHO) declared the outbreak of COVID-19 a global pandemic . In an effort to mitigate infection risk and spread of the virus, national stay-at-home and shelter-in-place orders were enacted, as well as closures of non-essential businesses and public venues in order to reduce traffic in otherwise heavily populated areas . The rapidity with which this pandemic arose has raised considerable concerns regarding depletion of healthcare resources and personnel . Federal and state governments as well as hospital systems across the country enacted initiatives to address these concerns, including cancellation of non-essential services, postponement of elective surgical cases, and reduction of on-site providers . These measures were quickly issued in an effort to conserve personal protection equipment (PPE), increase capacity in healthcare facilities, limit exposure of healthcare workers, and reduce virus transmission rates . Otolaryngologists, as well as other providers such as emergency medicine physicians and anesthesiologists, routinely perform aerosol-generating procedures, placing them at relatively higher risk than other specialties . Otolaryngologists are also at a unique risk during rhinologic examination and procedures due to the predilection of viral particles for the nasal cavities and nasopharynx . To respond to the pandemic, various practice modifications and alternatives have been implemented to protect otolaryngologists and patients from this high exposure risk . Initially, actions were taken to cancel clinics and elective cases, limit flexible laryngoscopy examinations and nasal endoscopy to only when necessary, avoid the use of topical decongestants and anesthetics, and practice stricter utilization of PPE . The rapidity of closures in response to the COVID-19 pandemic resulted in an unanticipated and abrupt disruption to the routine patient-care workflow [ , , ]. Despite the importance of mitigating the impact of the pandemic, safe and timely patient care remains a priority [ , , ]. Telemedicine services have risen to accommodate the need for continued patient care while allowing observance of social distancing practices [ , , ]. This alternative approach to patient interaction allows audio and visual communication via virtual means [ , , ]. Platforms such as Zoom, Doxy.my, FaceTime, and others have rapidly come to the forefront of everyday medical practice to facilitate continued patient care [ , , , ]. The WHO describes telemedicine as follows: “…using information and communication technologies for the exchange of valid information for diagnosis, treatment and prevention of disease and injuries, research and evaluation, and for the continuing education of health care providers, all in the interests of advancing the health of individuals and their communities.” This broad description can be summarized as the use of virtual communication methods in order to facilitate patient care. The methods of communication can be synchronous or asynchronous . A synchronous form of telemedicine refers to a real-time audiovisual interaction such as a Zoom call . Asynchronous is a term that describes store-and-forward methods of communications, such as images or video recorded by the patient and reviewed by the provider at a later time .
Methods
PubMed and Google Scholar were queried using combined keywords such as "telemedicine," "covid," and "otolaryngology." Additional queries were made with particular subspecialty phrases such as "rhinology" or "otology" to maximize the yield of relevant titles. Our initial search yielded approximately 279 related results, which were screened for relevance; 100 abstracts were selected for abstract review. We included articles that specifically discussed the use of telemedicine within the context of the COVID-19 pandemic. Abstracts were excluded if they were not in English, not related to otolaryngology, or if the full text was unavailable for access. Of these, 37 articles were selected for complete review of the full text.
Results
3.1 Facial plastic surgery
Despite the recent wave of telemedicine utilization, the application of telemedicine within plastic surgery is not a new concept [ , , ]. The use of telemedicine has been documented in acute plastic surgery cases, observation of chronic cases, postoperative monitoring for surgical site healing, close follow-up of microvascular reconstructive cases, and remote management of wounds [ , , ]. Telemedicine can also be used to enhance multidisciplinary collaboration and provide virtual supervision in cases that require it [ , , , , , ]. Establishing management algorithms that integrate telemedicine into routine practice would facilitate uninterrupted patient care in a safe manner while limiting unnecessary exposure of patients and providers in the post-COVID-19 era [ , , , , , , ].

A study by Jones et al. aimed to establish the accuracy of asynchronous digital images to aid decision making in acute plastic surgery consultations . They concluded that not only were the digital images sufficiently accurate, this method of data transfer also improved provider decision making with regard to operative priority . A separate study by Trovato et al. also demonstrated the accuracy of digital images by establishing outcomes similar to on-site examination . Furthermore, Clegg et al. found that virtual care, using synchronous telemedicine consultation, is comparable to traditional in-person consultation, with the added benefits of reduced transportation costs and a shorter interval from consult request to completion .

In the outpatient facial plastic surgery setting, in-person evaluation is likely irreplaceable for select cases; procedures such as Botox treatments, filler injections, and laser therapies will continue to require direct patient interaction. In light of this, telemedicine may effectively be implemented as a supplement to in-person clinic visits in both synchronous and asynchronous forms. A review by Shokri et al. describes the application of telemedicine within facial plastic surgery for initial consultation and for postoperative counseling . Their experience also demonstrates the role of virtual multidisciplinary care through their use of telemedicine in a facial nerve clinic, with collaboration from a physical therapist and a facial therapist . They report high patient satisfaction with their virtual methods .

Another limitation of telemedicine in cosmetic facial plastic surgery is the challenge of obtaining optimized photographs for surgical planning. Conventionally, standardized lighting, background, and positioning are used to achieve optimal photographic results. Tower et al. addressed this challenge by identifying a method of "screenshot photography" to coach patients on how to take high-quality photographs for remote pre-operative planning and documentation . These studies demonstrate that timely and accurate cosmetic facial plastic consultations can be achieved while limiting nonessential contact in the post-COVID-19 era, with the added benefit of reducing the burden of cost, transportation, and time delay compared with face-to-face interaction [ , , ].

3.2 Otology
Of all the subspecialties of otolaryngology, otology is perhaps the most amenable to a telemedicine platform . McCool and Davies conducted a retrospective cohort study to determine which clinical diagnoses within otolaryngology were most eligible for evaluation over telemedicine visits .
They found that overall, 62% of ear, nose, and throat consultations were eligible for telemedicine evaluation, and of those, inner and middle ear complaints were the most likely to be eligible . Over 80% of middle ear complaints and over 90% of inner ear complaints were eligible for telemedicine consultation because they less commonly require a procedure to reach a diagnosis . Telemedicine allows providers to expedite the otologic evaluation of patients who otherwise may have been delayed by the volume of most practices . One study notes that prior to telemedicine use, 47% of audiology and ENT patients would wait at least 5 months for in-person new appointments ; implementation of telemedicine decreased this figure to 8% within the first 3 years and to less than 3% in the following 3 years .

Innovative devices that aid in the remote detection and capture of otologic pathology have also been investigated as a means to facilitate virtual evaluation . One such device is a smartphone-enabled otoscope, which captures images of the tympanic membrane for remote evaluation by an otolaryngologist . One study of smartphone-enabled otoscopy reports a 96% specificity in identifying normal tympanic membranes and 100% sensitivity in identifying pathology . With a 97% positive predictive value and a small false-positive rate, this technology could be useful as a screening tool, reducing the need for unnecessary in-person specialty care visits . One limitation to widespread use of smartphone-enabled otoscopy is the requirement for patients to access the device, which may be expensive or unavailable in certain areas . An alternative application would be the use of this technology in primary care practices with subsequent forwarding of the images to the otologist . When trained healthcare workers are equipped with a smartphone-enabled otoscope, store-and-forward telemedicine allows adequate screening of otology patients in the community while minimizing unnecessary in-person evaluations . The use of these devices has yet to be completely validated; however, their value in screening and mitigating overcrowding in ENT clinics is evident . Thus, expediting processes to validate and incorporate these technologies into routine practice would allow their widespread use, which becomes particularly important during natural disasters or global pandemics . Arriaga et al. demonstrate how invaluable this technology is when used with telemedicine during the aftermath of Hurricane Katrina, during which patient access to otology and neuro-otology care was significantly compromised . Their experience can be extrapolated to modern use during the current public health crisis of the COVID-19 pandemic .

In order to standardize and streamline patient care via telemedicine, diagnostic and treatment algorithms amenable to virtual practice should be investigated and created . Chari et al. offer an algorithm for the management of dizzy patients designed for a telemedicine platform . The algorithm emphasizes initial triage of patients with potentially life-threatening neurologic or cardiovascular conditions . The next step addresses patients who can begin generic interventions that do not require diagnostic precision . Finally, the algorithm seeks to identify patients whose conditions require further assessment .
The patient history is a key component in the evaluation of a complaint of dizziness, and often the history can delineate otologic from non-otologic etiologies, which makes this complaint particularly amenable to an initial virtual consultation .

3.3 Rhinology
The predilection of COVID-19 for the nasal cavity, nasopharynx, and oropharynx has been well described and creates a unique risk to rhinologists, who routinely manipulate these anatomic regions . In addition to examination of the nasal cavities and oropharynx, manipulation of the nasal cavities and nasopharynx during nasal endoscopy, epistaxis management, debridements, biopsies, and other common in-office and surgical procedures poses a particularly high risk of aerosolization to the provider and staff . During the initial phases of the pandemic, practices were modified to address the rhinology-specific concerns regarding high transmission risk . As the initial wave of the pandemic declines in certain parts of the country and clinics resume practice, there is a new wave of concern, namely the wave of patients whose visits were deferred. With the increase in clinical activity, new guidelines will need to be enacted for the evaluation and treatment of rhinologic patients .

Setzen et al. studied the impact of the COVID-19 pandemic on rhinologic practice patterns, including changes in practice volume, usage of telemedicine, usage of PPE, implementation of in-office rhinologic procedures, and physician wellbeing, by sending a 15-question survey to the members of the American Rhinologic Society (ARS) . They identified that 96.2% of respondents had begun incorporating telemedicine in response to the pandemic, demonstrating that rhinologic visits are amenable to telemedicine . For example, follow-up patients who require treatment modification for allergic rhinitis or chronic sinusitis may be evaluated virtually based on symptoms, particularly those who have undergone prior nasal endoscopy; if symptoms persist or worsen, an in-person evaluation may then be desired. Additionally, a lower threshold may be employed for imaging such as CT scans, particularly in COVID-19-positive or unknown cases, and images amenable to remote evaluation can be used to initiate treatment-plan discussions with patients without requiring face-to-face contact . Creating consensus guidelines to standardize practices would give guidance and support to practicing rhinologists in deciding which cases need in-person evaluation and which are amenable to telemedicine consultation .

Additionally, because anosmia is a characteristic feature of COVID-19 infection, rhinologists are uniquely positioned to evaluate and assess this complaint and may be the first to identify infected patients. Klimek et al. demonstrate the ability to quantify olfactory dysfunction via telemedicine . Despite anosmia being a key feature of asymptomatic carriers of COVID-19, olfactory disturbance is a common complaint in most rhinology practices, making the discrimination of high-risk from low-risk patients even more difficult . Employing telemedicine as an adjunct to practice, and increasing its use in hotspot geographic regions or during periods when COVID-19 case numbers rise, can greatly aid in mitigating infection spread and preserving PPE .

3.4 Pediatrics
The impact of COVID-19 infection in pediatric patients is a topic currently under investigation .
Initial reports seemed to suggest that children were somewhat protected from infection; however, later studies showed evidence of an inflammatory syndrome similar to Kawasaki disease associated with COVID-19 infection within the pediatric population . Furthermore, children play a key role in community-based transmission by functioning as asymptomatic carriers . Therefore, the need to limit spread is equally important in the pediatric otolaryngology clinic as in the adult clinic, and telemedicine platforms have a key role . Additionally, diagnosing and treating children in person often requires the presence of adult caretakers, increasing the number of people on site at a given time and further exemplifying the need for telemedicine in this population. A retrospective study by Smith et al. compared diagnosis and management plans completed via videoconference with those completed by face-to-face interactions in a pediatric otolaryngology clinic . They found that the recorded diagnosis was the same in 99% of cases, indicating high diagnostic accuracy of telemedicine evaluations . Furthermore, they found that surgical management decisions were the same 93% of the time. From diagnostic-accuracy and presurgical standpoints, employing telemedicine is feasible for a pediatric otolaryngology practice . There are challenges, however, with regard to the limitations of the physical examination. In pediatric patients, obtaining a complete physical exam is often difficult in person and can be even more difficult on a virtual platform . Despite this, the implementation of telehealth can be exceedingly useful in contexts that involve counseling, family education, or long-term management discussions, such as for cochlear implant candidates or microtia [ , , ]. In addition, parents oftentimes seek guidance regarding seemingly concerning symptoms that may be less alarming to the trained pediatric otolaryngologist . Reassurance and guidance can be provided via telehealth visits in certain cases, such as known mild laryngomalacia or obstructive sleep apnea . Notable cases would then be recommended for in-person follow-up. Another domain in which telemedicine can be utilized and integrated into clinical care is the care of cleft lip and palate patients . Patients undergoing cleft lip and palate repair require comprehensive and multidisciplinary care for a prolonged period of time [ , , ]. Costa et al. demonstrate the feasibility of telemedicine for the initial evaluation and continued postoperative management of cleft patients in the Southern United States and Mexico, with alleviated cost and travel burdens on patient families and providers, extending specialty care to otherwise underserved areas . Their retrospective study generated a perioperative treatment algorithm that effectively incorporates telemedicine into cleft care . This model allows providers to extend specialty care to broad geographic areas; limit cost, time, and travel burdens on patients and families; and obtain consistent follow-up .

3.5 Laryngology

The use of telemedicine within the field of laryngology is not new to the COVID-19 era; however, the need for its incorporation into routine practice has become essential. In 2018, Bryson et al. demonstrated that high-quality flexible laryngoscopy and videostroboscopy images can be transmitted electronically to off-site laryngologists . An application of this technology would be to connect specialists who may offer consultation services to providers in remote or rural areas .
This application, however, would be limited by the necessity of an on-site provider trained in performing laryngoscopy and stroboscopy . Flexible laryngoscopy, moreover, has been noted to be an aerosol-generating procedure . In the post-COVID-19 era, remotely sharing laryngeal pathology via telemedicine could help limit the number of repeat laryngoscopies . This would be particularly helpful in cases where patients request second opinions, or for diagnoses that can be monitored based on symptoms, such as laryngopharyngeal reflux. Additionally, in institutions where multiple members of the otolaryngology team may need to view the laryngoscopy examination, the number of scope exams performed could be limited by appointing one examiner who captures the images while the others review the information remotely to aid in guidance and management. This would limit the number of personnel in a patient room during an aerosol-generating procedure. Furthermore, in patients with known pathology, voice therapy sessions and follow-up visits have been shown to be amenable to telemedicine platforms . Doarn et al. discuss the implementation of a virtual portal to facilitate remote voice therapy sessions via telemedicine for patients with voice disorders. Thus, telemedicine certainly has a role in supplementing laryngology practices and should be utilized more frequently during acute “waves” or in locations where infection rates remain high.

3.6 Head and neck

3.6.1 Oncology management

Head and neck cancer patients are a unique entity within otolaryngology in that they often need prompt and consistent management to limit progression of their cancer . Unfortunately, cancer patients are also at higher risk of suffering complications related to COVID-19 infection . Therefore, consideration must be taken to protect this vulnerable patient population while simultaneously taking steps to deliver timely and accurate oncologic management . Judicious use of telemedicine platforms can help providers balance these risks . Telemedicine can help alert the provider to any new or subtle changes in symptoms without the need for high-risk face-to-face contact . The MD Anderson Head and Neck Surgery Consortium has created guidelines for management of head and neck cancer by subsite, and telemedicine is incorporated as an essential tool that should be used judiciously whenever feasible . A diagnosis of head and neck cancer often comes with quality-of-life challenges that patients must cope with . Use of telemedicine allows patients to maintain a stream of communication with their cancer provider and has been shown to reduce the emotional burden, quality-of-life compromise, and symptom distress that patients face . Pfeifer et al. performed a randomized controlled trial to compare the impact on quality of life and symptom distress in patients utilizing telehealth versus standard of care . They found that head and neck cancer patients who were monitored via the telehealth intervention reported significantly better QoL and a lower symptom burden posttreatment compared with patients who received routine cancer care .

3.6.2 Microvascular and free flaps

Patients who undergo microvascular reconstruction and free flap surgery require close and constant postoperative monitoring to ensure flap viability. Although this is currently accomplished largely by direct clinical care, methods of remote monitoring are in progress and may have future applications . Kiranantawat et al.
developed a smartphone application to monitor flaps postoperatively by assessing perfusion via skin color . They report 98% sensitivity and 94% specificity in detecting abnormal perfusion, as well as 84% accuracy in grading the severity of occlusion . Although this new platform is promising, further studies are required for clinical validation before widespread use. Even without novel smartphone applications, however, digital images and audiovisual resources can be used to monitor flaps remotely, evaluate surgical site healing, and direct decision-making for in-person evaluation or return to the operating room accordingly .
Discussion

The use of telemedicine is not new to the COVID-19 era . Although different forms of telecommunication have existed for decades, their use in medicine remained largely limited due to several factors. First, private health information becomes more difficult to protect when using third-party platforms, which poses a risk to the security of sensitive patient data [ , , , , ]. Second, insurance company reimbursements were limited for visits conducted virtually, lowering the incentive for providers to invest time in these types of visits [ , , , ]. Furthermore, the inability to conduct in-person physical examinations limits the amount of information a provider is able to obtain from a patient visit . Additionally, the potential medico-legal consequences of virtual visits further limit the incentive for providers to utilize this resource . In response to the COVID-19 pandemic, federal and state governments have amended policies and lifted prior restrictions on alternative modes of patient care . This has allowed virtual forms of communication to supplement, and in many cases substitute for, in-person visits. Notably, the Department of Health and Human Services (DHHS) has relaxed the requirement to use HIPAA-secured platforms for reimbursement [ , , ]. This facilitates utilization of convenient, accessible, low-cost, and commonly used applications such as FaceTime, Skype, and Google Hangouts, excluding public-facing platforms such as Facebook Live [ , , ]. Furthermore, the Centers for Medicare and Medicaid Services (CMS) have implemented copay waivers and made reimbursements for telemedicine visits comparable to in-person visits . DHHS has also relaxed paperwork, reporting, and audit requirements, and CMS has removed the restriction that required practitioners to be licensed in the state where they provide services . These changes, coupled with laxity in liability laws at the federal and state levels, have empowered physicians to utilize telemedicine more freely. With regard to billing and coding, Pollock et al. describe four types of billable services: telehealth and telemedicine services, telephone services, virtual check-ins, and E-visits/digital online services . Coding and billing remain outside the scope of this paper; however, it is important to highlight the different categories of virtual services available that may be implemented in practice . In the most intuitive form of a virtual visit, a patient and provider interact in real time via a platform that includes both audio and visual components . This service has been equated to an in-person visit in terms of CMS reimbursement during the COVID-19 pandemic . Telephone services, namely phone calls, were not previously covered under Medicare; however, recent changes have allowed telephone calls with new and established patients to be billed under specific codes . Virtual check-ins are asynchronous methods of communication and are also billable forms of service . In a virtual check-in, audiovisual information, such as a recording or image, is forwarded to a provider who reviews the information and responds at a later time (within 24 business hours) . Finally, E-visits describe the use of digital forms of communication such as the electronic health record (EHR) or email . These are generally not considered telemedicine services and are not billed as such . Table summarizes a comparison of the applications and limitations across the various otolaryngology subspecialties.
Fig. demonstrates the method of article selection for the purposes of this review. Although many states have begun to record declines in infection rates, many others remain at the apex of their curves. Additionally, the risk of future waves of this pandemic, or the onset of another pandemic, should not be overlooked. Practice modification guidelines that mitigate infection risk by utilizing telemedicine would be useful in these instances. These guidelines would ideally be enacted locally in regions with high infection rates or during future waves. Some of the practice modifications adopted during this pandemic were meant to be temporary mitigation strategies and are unlikely to remain in place long term. The use of telemedicine, however, not only has a role in the post-COVID-19 era but also represents a likely future within medicine, particularly within otolaryngology, given the high risk posed to this specialty. Incorporating telemedicine into the infrastructure of patient care will ensure a more viable and robust system that can withstand future global pandemics or, more likely, future “waves” of the current one . Future utilization of telemedicine could also be a dynamic process, implemented in locations that are emerging hotspots or at risk of local outbreak to limit the spread of contagion. Implementation of specific treatment algorithms and incorporation of workflow systems that integrate telemedicine are key to transitioning to a viable and sustainable post-COVID-19 patient care model.
Conclusions

At the onset of the COVID-19 global pandemic, new policies were quickly enacted in an effort to conserve personal protective equipment (PPE), increase capacity in healthcare facilities, limit exposure of healthcare workers, and reduce virus transmission rates. Telemedicine services have risen to accommodate the need for continued patient care while allowing observance of social distancing practices. Now, as many states initiate re-opening and others see an increase in infection rates, steps to mitigate unnecessary exposure are just as necessary as they were at the beginning of the pandemic. The use of telemedicine remains essential in the post-COVID-19 era, representing a likely future within otolaryngology. Incorporating telemedicine into the infrastructure of patient care will ensure a more viable and robust system that can withstand future global pandemics or future “waves” of the current one. Telemedicine may also be used more frequently in locations that are emerging hotspots or at risk of local outbreak to limit the spread of contagion. Implementation of specific treatment algorithms and workflow systems that integrate telemedicine is key to transitioning to a viable and sustainable post-COVID-19 patient care model.
Experience with single transscrotal orchidopexy for palpable cryptorchidism in Vietnamese children

Cryptorchidism (undescended testes) is a common congenital anomaly, with an incidence of 1–2% at 3 months of age. Of these, 80–90% are palpable and 10–20% non-palpable. Palpable cryptorchidism is typically approached via a two-incision orchiopexy, with successful placement in the scrotum in 89–100% of patients . Bianchi introduced the single-incision transscrotal orchidopexy in 1989; however, this approach has not yet gained widespread acceptance. Single-incision orchidopexy carries many potential advantages, such as no disruption of groin anatomy, less postoperative pain, and a faster operative time – . The technique has not been widely adopted primarily because of concerns regarding access for hernia sac excision, insufficient length of the spermatic cord, and adherence to traditional, well-established techniques. Therefore, we conducted this study , . The objective of this study was to evaluate the short-term outcomes of this method in the treatment of palpable low undescended testes in children.
This prospective cohort study was carried out between October 2022 and February 2023 at the Dept. of Surgery, Children's Hospital 2, Ho Chi Minh City, Vietnam. Patients attending the day ward for elective orchidopexy were included.

Inclusion criteria

All patients were under 16 years and were fully consented to use this approach as a new departure from what had been standard surgery at our hospital. The primary outcome of the study was the surgical success rate, while secondary outcomes included operative time and patient satisfaction. Patients were diagnosed with palpable low-lying cryptorchidism if they met the following criteria:

+ The patient had no testis in the scrotum since birth.
+ The testis was palpable and located along the pathway from the inguinal canal to the scrotum.
+ When the spermatic cord was stretched, the lowest position of the testis reached the scrotum.
+ Upon release of the stretched spermatic cord, the testis immediately retracted above the scrotum.

Inguinal cryptorchidism was diagnosed when the testis was located above the pubic tubercle, while ectopic testis was diagnosed when the testis was located below the pubic tubercle. Clinical inguinal hernia was defined based on observations by the parents and/or the surgeon, characterized by a painless, reducible groin swelling. Patent processus vaginalis (PPV) was confirmed when probing with forceps or a Kelly clamp revealed communication between the tunica vaginalis and the peritoneal cavity. Ligation of the processus vaginalis was performed at the level of the deep inguinal ring, close to the preperitoneal fat or the inferior epigastric artery.

Testicular volume measurement: the volume of all testes was measured by ultrasound using the formula Testicular Volume = Length (L) × Width (W)² × 0.52, applied according to the study by Tseng .

Final parental satisfaction was assessed at 6 months post-operation using a simple questionnaire: parents rated their satisfaction based on their subjective perception of the surgical scar, with the options “very satisfied,” “satisfied,” or “not satisfied.”

Exclusion criteria

Any patient with a history of previous inguinal or scrotal surgery; a congenital anomaly such as bifid scrotum, penoscrotal transposition, or proximal hypospadias; a high-positioned testis; or loss to follow-up.

Testicular volume was measured by ultrasound prior to surgery and again at 6 months of follow-up. The Wilcoxon rank test was used to compare pre- and postoperative volumes, and testicular atrophy was considered to have occurred when the volume had decreased by more than 50% postoperatively. We used SPSS software version 20.0 for the data analysis. Potential biases include single-center recruitment bias and measurement bias associated with operator-dependent ultrasound assessments.

Brief description of the technique

An incision of 1.5–2 cm is made in the uppermost fold between the scrotum and groin, directed toward the external ring, allowing the external spermatic fascia pouch to be secured; dissection of the gubernaculum continues to mobilize the pouch and divide all restricting bands (Fig. A). A small proximal retractor was used to provide additional exposure and allow ligation of the processus vaginalis (if present). Occasionally a small incision (0.5–1 cm on the anterior wall of the inguinal canal) was needed in the external oblique at the deep ring/inguinal canal, especially for testes that retracted back to this area (Fig. B).
The dissection required to obtain sufficient length for tension-free placement of the testis within the scrotum is shown in Fig. D. We create a dartos pouch by using two Kelly clamps to grasp the dartos layer at the lower edge of the incision; scissors are then used to separate the skin from the dartos layer, forming the pouch. The testis is brought down into the scrotum by passing it through the dartos layer and securing it with a Vicryl suture between the lower pole of the testis and the base of the pouch (Fig. F). All operations were performed in the day surgery department by a single team, and patients were followed up at 1 week and at 1, 3, and 6 months. The position of the testis after orchidopexy was classified as desired (low or middle of the scrotum) or unsatisfactory (upper half of the scrotum or outside the scrotum at 6 months of follow-up). The volume of all testes was measured by ultrasound . Testicular atrophy was judged to have occurred if the volume was less than half the preoperative size. The final parental satisfaction outcome was graded at 6 months post-operation as follows: very pleased, pleased, or unhappy. Surgery was considered a success when there were no complications, the testis was in an acceptable scrotal position, and there was no measurable atrophy at 6 months of follow-up. The research process is illustrated in Fig. . This study was approved by the Ethics Committee of Children's Hospital 2, Vietnam, under approval number 813/GCN-BVND2. All experiments involving human subjects were conducted in accordance with relevant guidelines and regulations. Informed consent was obtained from the parent and/or legal guardian for study participation.
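For concreteness, here is a minimal sketch of the ultrasound volume formula and the atrophy criterion described above, written in Python; the two scans at the end are hypothetical measurements, not data from the study.

```python
def testicular_volume(length_mm: float, width_mm: float) -> float:
    """Testicular volume (mm^3) = L x W^2 x 0.52, per Tseng's formula quoted above."""
    return length_mm * width_mm ** 2 * 0.52

def is_atrophic(pre_op_mm3: float, post_op_mm3: float) -> bool:
    """Atrophy was judged to have occurred if the 6-month volume
    fell below half of the preoperative volume."""
    return post_op_mm3 < 0.5 * pre_op_mm3

pre = testicular_volume(18.0, 10.0)   # hypothetical preoperative scan
post = testicular_volume(19.0, 10.5)  # hypothetical 6-month scan
print(f"pre: {pre:.0f} mm^3, post: {post:.0f} mm^3, atrophy: {is_atrophic(pre, post)}")
```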
Fifty-three palpable low-lying undescended testes in 47 patients were eligible for our study: 22 on the right side, 19 on the left side, and six bilateral cases. The median age was 26.6 months (range 9–147 months). The testes were found in the inguinal canal in 12 cases and outside the inguinal canal in 41 cases; 43 cases had no clinical hernia symptoms, and 10 had an obvious clinical hernia (Table ). The median operative time was 25 min (range 20–40 min). A patent processus vaginalis was found in 50 cases, while three cases (5.7%) had no processus vaginalis. All patent processus vaginalis were ligated as high as possible, close to the deep inguinal ring. The median follow-up time was 7 months (range 5–7 months). No patient sustained a surgical site infection, but seven developed mild scrotal edema, which resolved spontaneously within 1–2 weeks without any intervention. An excellent scrotal position was obtained in 52 cases; in one, the testis sat slightly high but in a still acceptable position. Cosmetic satisfaction was achieved in 100% of cases, with parents in 52 cases (98%) very pleased with the wound, which in many cases appeared nearly invisible. Testicular volume was measured 6 months after surgery. An increase in testicular volume was observed after orchidopexy, with a mean difference of 78.5 mm³ between preoperative and postoperative measurements (95% confidence interval: 10.9–146.2); the difference was statistically significant (p < 0.05) (Table ). In no case did testicular volume decrease below 50% of the preoperative value. The overall success rate of this surgery was 98%.
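As an illustration of the paired analysis named in the methods, the sketch below runs a Wilcoxon signed-rank test (SciPy's equivalent of the SPSS procedure, in its paired form) on pre/post volume pairs; the eight values are invented for demonstration, whereas the study's actual result was the mean difference of 78.5 mm³ reported above.

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical paired volumes (mm^3); the real study analyzed 53 testes.
pre_op = np.array([850, 920, 610, 1040, 770, 930, 680, 1110])
post_op = np.array([910, 1010, 650, 1100, 830, 1020, 700, 1180])

stat, p_value = wilcoxon(post_op, pre_op)  # paired, two-sided by default
diff = post_op - pre_op
print(f"mean difference: {diff.mean():.1f} mm^3, W = {stat:.1f}, p = {p_value:.3f}")
```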
The age of patients in our study was rather higher than usual, with 32% of patients older than 3 years. It has been suggested that orchidopexy in older children is less successful and that orchidopexy should be performed before the age of 2 years , , . We had no difficulty performing this technique in older patients, an experience shared by others , . While it may be controversial, it appears that age at surgery does not affect the surgical outcome, although older age may affect testicular function; fertility and spermatogenesis in boys operated on after 1 or 2 years of age warrant further study. Our patients are older because of social and cultural factors specific to Vietnam, and this demographic may take some time to change. Talabi noted that age at the time of surgery is not the major factor affecting the outcome of surgery . Some authors consider single transscrotal orchidopexy the preferable approach in obese patients, as it avoids longer inguinal incisions . We believe this to be an important consideration: in obese boys the required dissection is greater and more difficult, and the risk of iatrogenic torsion is higher with a two-incision approach . Choosing a high scrotal skin incision and creating a dartos pouch after ligation and division of the processus vaginalis helps minimize obstruction of the surgical field by adipose tissue. In our 53 cases, 22.6% of testes were in the inguinal canal, but this did not limit the ultimate result of a scrotal testis in 52 of 53 procedures. In our study we noted that when the testis can be brought close to the scrotal neck under general anesthesia, a scrotal approach is a viable option – . We think that when the testis is palpable and can be milked out of the ring, it is suitable for a scrotal approach. Misra advised that the high scrotal orchidopexy approach should not be applied in cases in which there is an obvious patent processus vaginalis . We disagree: the presence of a hernia or an obvious patent processus is not a contraindication to this approach, as 43 of our patients had this anatomy with no impact on the approach or the outcome , . We believe that the sac or PPV can be ligated at the normal position in the majority of children via a single scrotal incision. There was no statistically significant difference in operative time between the groups with and without a patent processus vaginalis, similar to the findings of Takahashi . We believe that proactive division of the anterior wall of the inguinal canal in all cases is key to successful ligation of the processus vaginalis, especially in cases with a concurrent inguinal hernia. Although not within the scope of our study, the scrotal incision may enhance cosmetic outcomes when used for inguinal hernia repair in children, particularly in cases of bilateral hernias. However, it may increase the risk of surgical site infection and has not demonstrated a significant difference in operative time compared to the traditional inguinal approach . Additionally, the reported recurrence rate of inguinal hernia is 0.67%, and there have been reports of acquired undescended testes following surgery . We believe that inguinal hernia repair via the scrotal approach is feasible; however, selecting the appropriate incision (median raphe, mid-scrotal, or inguinoscrotal) and proactively dividing the anterior wall of the inguinal canal for high ligation of the processus vaginalis are crucial to minimize postoperative hernia recurrence.
Testicular size may increase or decrease postoperatively , , but it generally shows a significant increase compared to preoperative measurements after an average follow-up of 2.5 years . Therefore, our short postoperative follow-up period may explain the cases of decreased testicular size in our study. Moreover, aside from operator dependency, the quality of ultrasound images obtained in the inguinal region may differ from that in the scrotal region. Additionally, our testicular volume measurement relied only on the length and width dimensions from ultrasound imaging, which may introduce measurement error. Although the assessment was subjective and focused solely on aesthetics, all families were satisfied with the appearance of the surgical scar. The aesthetic quality of the scrotal incision is considered one of the advantages of the procedure, providing high cosmetic value and reducing potential psychological distress for the patient . While various factors are cited to explain the outcome of successful orchidopexy (scrotal position with no volume decrease), it is possible that mobility of the testis under general anesthesia is the most predictive variable, and as such the approach (one or two incisions) may not affect the outcome , . In this regard we have shown an excellent outcome (98% success) via a single incision, with a satisfactory position, no loss of testicular volume, and a desirable cosmetic result. We saw no significant complications apart from mild scrotal edema, and none of our patients required a second incision. This supports previous reports on the success of the single incision (Table ). Our study is limited by relatively small numbers but is important in that it introduces a new concept and procedure in Vietnam.

Our study has several limitations. The study does not represent all cases of palpable cryptorchidism, and it does not answer the question of what percentage of palpable cryptorchidism cases can be treated with single transscrotal orchidopexy. Due to the non-comparative design, the study does not support conclusions regarding the efficacy of single transscrotal orchidopexy in comparison with other techniques. Finally, the follow-up period was too short to allow a conclusive assessment of the long-term effects of single transscrotal orchidopexy on the patients who underwent the procedure.
We believe that single transscrotal incision orchidopexy is a beneficial technique for treating palpable testes that can be readily milked down to the junction of the scrotum and groin under general anesthesia. This approach offers clinical advantages, including shorter operative time and reduced postoperative discomfort. However, we recognize that our sample size is relatively small, which may limit the generalizability of our findings. Further research with larger sample sizes and long-term follow-up is needed to confirm these results and strengthen their applicability in broader clinical practice. Our prospective study demonstrated excellent postoperative testicular position, no decrease in volume, and satisfactory cosmetic outcomes.
An ultrasound-guided modified iliac fascia and sacral plexus block application in a critically ill patient undergoing artificial femoral head replacement surgery: a case report

General anesthesia and intrathecal anesthesia are the two main anesthetic options for artificial femoral head replacement surgery . While ultrasound-guided nerve block has emerged as a well-established technique for postoperative analgesia , its efficacy as a sole anesthetic for the entire procedure remains to be fully explored in clinical practice. Compared with general anesthesia, ultrasound-guided nerve block ensures stable intraoperative hemodynamics owing to its minimal impact on the circulatory and respiratory systems. Furthermore, when juxtaposed with intrathecal anesthesia, ultrasound-guided nerve block diminishes the necessity for coagulation management . Additionally, nerve blocks offer superior analgesic effects, heightened patient satisfaction , fewer hemodynamic fluctuations , and reduced perioperative complications . Considering these advantages, ultrasound-guided nerve block emerges as a compelling alternative anesthetic technique, especially for patients confronting challenging circumstances such as severe underlying conditions and multiple comorbidities.

An 88-year-old male patient presented with a left femoral fracture (Fig. ), requiring artificial femoral head replacement surgery. More than 20 days elapsed before the patient was admitted to a hospital, owing to initial rejection by other hospitals. In the setting of an extensive medical history including controlled hypertension, coronary artery disease, old cerebral infarction, and Alzheimer's disease (medication before hospitalization in Table ), the delayed hospitalization exacerbated the patient's condition significantly. Upon presentation, the patient exhibited signs of heart failure, with an elevated brain natriuretic peptide (BNP) level of 878.0 pg/mL, as well as severe hypoalbuminemia with an albumin level of 27.2 g/L. Imaging revealed a new large-area cerebral infarction (Fig. ), severe pulmonary infection (Fig. ), and bilateral pleural effusion (Fig. ), and laboratory testing showed a coagulation disorder, characterized by an activated partial thromboplastin time (APTT) of 36.9 s and a D-dimer level of 2.97 µg/mL. Notably, the patient failed to recover from pneumonia, progressing rapidly toward acute respiratory distress syndrome with a peripheral capillary oxygen saturation (SpO2) of 88%, despite receiving oxygen therapy via a regular nasal cannula from the hospital's central oxygen supply system at a flow rate of 3 L/min. Prolonged immobilization due to the fracture contributed to the development of hypostatic pneumonia and other complications, impeding his overall recovery. Following a comprehensive assessment, the patient was classified as frail according to the Fatigue, Resistance, Ambulation, Illness, and Loss of Weight (FRAIL) index and was assigned New York Heart Association (NYHA) functional class IV status. After multidisciplinary consultations, surgical treatment was judged the best option for the patient.

Upon admission to the operating room, standard monitoring was applied to the elderly patient . His anthropometric measurements revealed a height of 165 cm and a weight of 65 kg, giving a body mass index (BMI) of 23.87 kg/m².
Baseline vitals were as follows: a heart rate of 72 beats per minute (bpm), an arterial blood pressure of 239/76 mmHg, and an SpO2 of 93% on ambient air. Supplemental oxygen at 5 L per minute via nasal catheter was subsequently administered, achieving an SpO2 of 100%. Following initial stabilization, central intravenous and radial arterial lines were established, followed by a comprehensive blood gas analysis, which revealed an arterial partial pressure of oxygen (PO2) of 132 mmHg, a partial pressure of carbon dioxide (PCO2) of 45.6 mmHg, and a hemoglobin (Hb) concentration of 98 g/L, with no other significant abnormalities. After adequate preparation, we decided to perform a modified iliac fascia and sacral plexus block under direct ultrasound guidance (Mindray Bio-medical Electronics Co., Ltd., Model: ME 8P). The patient was positioned supine with the affected lower limb slightly abducted. A high-frequency (11.0 MHz) linear ultrasound probe was placed parallel to the inguinal ligament, and the hyperechoic fascia lata, iliac fascia, and iliopsoas muscle could be seen under ultrasound (Fig. ). Subsequently, a mixture of 0.25% ropivacaine and 1% lidocaine (50 mg ropivacaine and 200 mg lidocaine) totaling 20 mL was injected into the designated anatomical gap using a short-axis out-of-plane technique, targeting the femoral nerve within the lateral sheath under the iliac fascia, with the injection directed cephalad. Following this, the patient was repositioned laterally. A convex array probe (6.0 MHz) was placed at the lower edge of the medial half of the line connecting the midpoint of the greater trochanter of the femur and the posterior superior iliac spine, where the ultrasound image showed a linear hyperechoic signal corresponding to the iliac bone (Fig. ). Sliding the probe inward and downward gradually brought the greater sciatic foramen into view, within which the sacral plexus was highlighted. The injection point was on the caudal side near the greater sciatic foramen, above the iliac bone, with the needle inserted out of the short-axis plane and the bevel of the needle tip directed downward. The same concentration and dosage of local anesthetic were administered via the described technique. Fifteen minutes post-administration, sensory and motor blockade of the targeted nerve territories was confirmed through cutaneous stimulation, with no adverse effects observed. Subsequently, the elderly patient was induced into a state of sedoanalgesia with a loading dose of dexmedetomidine (1 µg/kg) administered over 10 min, followed by a maintenance infusion at 0.4 µg/kg/h, in conjunction with propofol at 1 mg/kg/h to maintain the bispectral index (BIS) at approximately 60. Surgical intervention commenced five minutes after the attainment of unconsciousness. Throughout the procedure, in view of the patient's preoperative debility, he received intravenous fluids totaling 600 mL of crystalloid (500 mL of sodium lactate Ringer's solution (Baxter, Baxter Healthcare Co., Ltd.) and 100 mL of 0.9% sodium chloride), 500 mL of hydroxyethyl starch (Voluven, Fresenius Kabi Pharmaceutical Co., Ltd.), 2 units of plasma, and 50 mL of autologous blood.
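To make the dosing arithmetic in this case easy to check, the short Python sketch below reproduces the figures reported above: the BMI from height and weight, the local anesthetic mass implied by a weight/volume percent concentration and injected volume, and the weight-based dexmedetomidine loading dose. The numbers come from the case description; the helper functions are purely illustrative and not part of any clinical software.

```python
# Illustrative arithmetic for the doses reported in this case (not clinical software).

def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index in kg/m^2."""
    return weight_kg / height_m ** 2

def mg_from_percent(concentration_pct: float, volume_ml: float) -> float:
    """A w/v percent solution contains (percent * 10) mg of drug per mL."""
    return concentration_pct * 10 * volume_ml

weight_kg, height_m = 65.0, 1.65
print(f"BMI: {bmi(weight_kg, height_m):.2f} kg/m^2")       # 23.88 (reported as 23.87)

# 20 mL of 0.25% ropivacaine + 1% lidocaine per injection site
print(f"Ropivacaine: {mg_from_percent(0.25, 20):.0f} mg")  # 50 mg
print(f"Lidocaine:   {mg_from_percent(1.0, 20):.0f} mg")   # 200 mg

# Dexmedetomidine loading dose of 1 ug/kg over 10 min
print(f"Dexmedetomidine load: {1.0 * weight_kg:.0f} ug over 10 min")  # 65 ug
```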
Intraoperative vital signs remained within satisfactory limits, with the heart rate predominantly ranging between 70 and 80 bpm, invasive blood pressure averaging around 180/80 mmHg, SpO2 maintained between 99 and 100% with supplemental oxygen, and a respiratory rate of 10–12 breaths per minute. Overall, intraoperative hemodynamic stability was achieved. The surgical procedure concluded successfully after approximately 1.5 h under these stable conditions, following which the patient was transferred to the post-anesthesia care unit (PACU). Within three minutes of arrival in the PACU, the patient regained consciousness calmly, facilitated by judicious pharmacological management during surgery. Postoperative blood gas analysis demonstrated an arterial PO2 of 228 mmHg, a PCO2 of 38.7 mmHg, and an Hb concentration of 77 g/L. Furthermore, the patient was provided with a patient-controlled analgesia pump delivering a solution of 100 µg of sufentanil and 10 mg of tropisetron, with a background infusion rate of 1.5 mL, a single bolus dose of 1.5 mL, and a 15-minute lockout interval. The elderly patient's level of consciousness essentially returned to the preoperative state, with a Glasgow Coma Scale (GCS) score of 3 points for eye opening, 4 points for verbal response, and 5 points for motor response, totaling 12 points and classified as mild; a postoperative Visual Analog Scale (VAS) score of 2 points was recorded eight hours after the procedure. Ultimately, the patient demonstrated an accelerated postoperative recovery without significant discomfort, leading to a successful discharge. With population aging leading to a significant increase in elderly critically ill patients, anesthesiologists face unprecedented challenges. Traditional general or intrathecal anesthesia is often not the optimal strategy for such patients, so anesthetic techniques must diversify to keep pace with the growing number of critically ill patients and the increasing maturity of surgical techniques. In this case, the elderly patient presented with pronounced frailty compounded by a spectrum of comorbidities including hypertension, coronary artery disease, acute heart failure, old cerebral infarction, Alzheimer's disease, and coagulopathy. Traditional anesthetics and analgesics frequently used in general anesthesia possess potent cardiodepressive properties, potentially leading to significant hemodynamic fluctuations. Additionally, tracheal intubation inherently elicits airway irritation, exacerbating pre-existing pulmonary conditions and complicating extubation. Moreover, intrathecal anesthesia was contraindicated by his coagulation disorder. These factors directly or indirectly imperil the lives of elderly patients. Fortunately, nerve block anesthesia offers a viable alternative that circumvents these risks. Nerve block provides profound and prolonged anesthesia with minimal doses of local anesthetics while preserving spontaneous respiration. Furthermore, its analgesic efficacy may endure for up to 8–10 h postoperatively , markedly reducing reliance on opioid analgesia and its associated adverse effects. A retrospective study highlighted that regional anesthesia was associated with a modestly shorter length of stay compared with general anesthesia .
Employing ultrasound guidance in nerve blockade substantially reduces the potential for tissue trauma and inadvertent intravascular injection of local anesthetic, thus enhancing procedural reliability and safety . Previously, nerve blocks were used mainly as intraoperative adjuncts to general anesthesia, aimed at reducing opioid doses or providing postoperative analgesia . However, emerging literature suggests the feasibility of employing nerve blocks alone for selected procedures . We implemented an ultrasound-guided modified iliac fascia block combined with a sacral plexus block instead of the traditional iliac fascia block , which to our knowledge is the first reported application in such elderly critically ill patients. The innervation of the hip joint primarily stems from the ventral rami of the lower lumbar plexus (L2–L4) and upper sacral plexus (L4–S1) spinal nerve roots . Key nerves supplying the hip joint include the femoral and obturator nerves from the lumbar plexus, and the lumbosacral trunk (via the sciatic and superior gluteal nerves) from the sacral plexus . The traditional iliac fascia block comprises high and low approaches. The high iliac fascia block targets nerves above the inguinal ligament, effectively blocking the femoral, obturator, and lateral femoral cutaneous nerves. Conversely, the low iliac fascia block, directed below the inguinal ligament, often yields incomplete blockade, affecting only the femoral and obturator nerves. In our approach, we placed a high-frequency linear ultrasound probe parallel to the inguinal ligament to visualize the hyperechoic fascia lata, iliac fascia, and iliopsoas muscle, with the patient supine and the lower limb on the operated side slightly abducted. Subsequently, we administered a 20 mL combination of 0.25% ropivacaine and 1% lidocaine into the designated gap using a short-axis out-of-plane technique. The needle was directed under the iliac fascia within the lateral sheath of the femoral nerve, with the injection directed cephalad. Diverging from the traditional method, we positioned the injection lower and employed a lower concentration (0.25% ropivacaine combined with 1% lidocaine vs. 0.35% ropivacaine) and a reduced dosage (20 mL vs. 40 mL). A randomized controlled trial concluded that local anesthetics at lower concentrations may have distinct advantages. Considering the critical condition of the elderly patient, the adoption of such low-concentration local anesthetics ensures anesthetic safety while affording effective pain relief . Therefore, the approach in our case, marked by lower concentration and dosage, holds promise for expediting recovery and aligns with Enhanced Recovery After Surgery (ERAS) principles. This nuanced approach underscores the imperative of tailoring anesthesia protocols to individual patient profiles, particularly in the context of complex medical comorbidities. In this case, we successfully implemented an ultrasound-guided modified iliac fascia block combined with a sacral plexus block under sedation to facilitate surgical completion in a critically ill patient. The modified iliac fascia block yielded anesthesia and analgesic outcomes comparable to the traditional iliac fascia block on a safer basis. Notably, ultrasound-guided nerve blocks offer a promising avenue for critically ill individuals unsuited to general or intrathecal anesthesia, thereby extending newfound therapeutic potential to this patient cohort.
Expanding trauma education during war: pediatric trauma fundamentals training in Ukraine

On 24 February 2022, Russia expanded its war in Ukraine by launching a large-scale offensive across the country. Over the past 2 years, the conflict has devastated communities in Ukraine, leading to over 10,500 civilian deaths and almost 20,000 injured . This includes over 600 children killed and 1,350 injured . The ongoing conflict has caused immediate casualties while also having a profound impact on public health and hospital infrastructure. Over 1,700 attacks on Ukraine's health system have led to numerous medical facilities being damaged or destroyed, hundreds of health care workers killed , and significant disruptions of critical utilities supporting hospital functionality, including energy and water supply systems . The prolonged nature of the conflict has also led to a critical shortage of health and medical supplies, overburdened health care workers, public health emergencies, and a significant mental health burden on the Ukrainian civilian population. The escalation of the conflict has increased the need for trauma and emergency care throughout Ukraine, which has seen an exponential increase in war-related injuries from mechanisms such as penetrating trauma, burns, crush and blast injuries . The health consequences of these mechanisms of injury include complex traumatic injuries requiring immediate stabilization, advanced surgical interventions, rehabilitation, and comprehensive long-term care . In addition to the profound effect on hospitals, clinics, and medical supply chains, medical providers have faced unprecedented challenges to providing care, including the disruption of medical education and the transition of currently practicing providers toward trauma-based care and education. The Harvard Humanitarian Initiative (HHI), a university-based interfaculty initiative, has partnered with organizations, agencies, and ministries of health to support humanitarian responses around the world . Building from prior relationships delivering Basic Emergency Care and Chemical, Biological, Radiological, Nuclear, and Explosives (CBRNE) courses in Ukraine after Russia's annexation of Crimea in 2014 , HHI undertook a rapid needs assessment shortly after the 2022 Russian invasion to understand the trauma-related education required to meet acute care needs across the country. HHI, in partnership with the International Medical Corps, developed a multi-stream trauma training initiative to provide Ukrainian health care workers, public safety officials, and civilians with training in trauma management . After a successful preliminary implementation of the multi-stream intervention, and in response to feedback, a stand-alone pediatric trauma course was developed for implementation both at the country's freestanding children's hospitals and in general hospitals that receive pediatric patients. Given that course participants would either have a strong background in pediatrics or basic knowledge of adult trauma care with minimal pediatric knowledge, we undertook the development of a Pediatric Trauma Fundamentals (PTF) course. We developed this course to fill the gap in pediatric-focused trauma education in wartime or conflict settings, while also ensuring a highly contextually appropriate curriculum tailored to the Ukrainian context.
The two-day trauma training curriculum was planned and developed with input and feedback from Ukrainian partners. The overall objectives of the program included: (1) the development and implementation of a pediatric trauma fundamentals course to provide immediate training for health care providers in Ukraine and (2) the sustainable integration of the program into Ukrainian educational initiatives. The team sought to assess the effectiveness of the PTF educational implementation program on theoretical trauma knowledge, skills, and practical implementation during an active armed conflict in Ukraine. In the summer of 2022, prior to the conclusion of the first phase of an overarching multi-stream trauma program in Ukraine, a consensus was reached between the partner organizations (International Medical Corps and HHI) that a pediatric-specific trauma fundamentals course should be added as a stand-alone trauma education stream. A majority of pediatric trauma care takes place in a network of Ministry of Health-run freestanding children's hospitals not previously targeted in the first phase of the multi-stream trauma program. Additionally, frontline regions with general hospitals that receive pediatric patients, in cities such as Mykolaiv, were included in the PTF trauma initiative.

Course curriculum

A novel two-day course entitled 'Pediatric Trauma Fundamentals' was conceptualized and developed internally by a core group of seven pediatric emergency medicine physicians and nurses. The core team had previous experience in curriculum development and extensive experience in the humanitarian sector through engagement with various academic and international agencies and organizations. Content was sourced from several international resources including the World Health Organization (WHO) and the International Committee of the Red Cross Basic Emergency Care, Advanced Trauma Life Support, UpToDate, Fleisher and Ludwig's Textbook of Pediatric Emergency Medicine, and the Boston Children's Global Health Program pediatric resuscitation course . Curriculum topics and modules are listed in . Educational delivery modalities included didactic frontal lectures, hands-on skills stations, interactive case discussions, and team-based simulation scenarios. All materials were translated into Ukrainian and reviewed by International Medical Corps interpreters based in Ukraine for language, contextual, and cultural considerations. The two-day schedule was adapted as needed for safety/security considerations, which included tailoring course start and end times based on travel requirements and daily safety/security briefs providing real-time information about impending attacks. Supplementary PTF videos were developed for high-yield topics in pediatric trauma. Links were provided to students during the course and made publicly available on YouTube . A second curriculum was developed for the "training of trainers" (ToT) component of the course. In addition to the two-day PTF course, the five-day ToT course included three additional days: one day on adult learning and teaching theory and two days of flipped-classroom, student-driven didactic, skills, and simulation practice.

Course delivery

Courses were delivered in person by international English-speaking instructors and by Ukrainian instructors in Ukrainian, both with live Ukrainian/English bi-directional interpretation for the duration of all courses.
The PTF course was implemented in a three-part approach: (1) international instructor-led PTF courses, (2) ToT courses, which developed a cohort of Ukrainian instructors to teach PTF to Ukrainian participants, and (3) Ukrainian instructor-led courses with international mentorship. During the initial implementation of the PTF intervention (November 2022 to April 2023), English-speaking international instructors provided in-person instruction to Ukrainian participants. International instructors were recruited by HHI and International Medical Corps. Given the safety and security considerations in an active conflict zone, and to limit the number of unique providers teaching courses, international instructors were obligated to dedicate two-week blocks of course delivery during the PTF intervention. Recruitment of Ukrainian learner participants was undertaken by International Medical Corps and sought out the following priority medical providers as course participants: pediatricians, pediatric surgeons (all specialties), general practitioners, general surgeons, and any other provider who cares for traumatically injured children. During part two of the PTF implementation (August 2023 to December 2023), Ukrainian participants were identified to attend a five-day ToT course to become instructors. Ukrainian instructors were identified by recommendation from Ukrainian host universities. ToT courses were taught in person by international instructors. PTF courses thereafter were led by Ukrainian instructors, with international instructors onsite to provide active mentorship, content expertise, and educational delivery feedback.

Training sites

Training sites were identified by International Medical Corps based on the identified needs of Ukrainian providers who care for pediatric patients. provides a map of locations of all 11 cities where PTF courses were delivered during the program. Course delivery took place at universities, hotels, and hospitals.

Program evaluation

In alignment with the overarching multi-stream trauma program, the effectiveness of the intervention was assessed through several means. In-person course participation and video access statistics were tracked. Changes in knowledge and self-efficacy were measured individually through pre- and post-course written assessments and self-confidence surveys. Participants completed written evaluations immediately after finishing the course. This information was gathered on paper, transcribed into Kobo Toolbox, and analyzed with the R Studio statistical package . Follow-up evaluations conducted six to eight weeks post-course measured skill adoption, implementation, and maintenance using participants' preferred messaging platforms (Telegram, Signal, WhatsApp, or Viber). Knowledge changes were analyzed using paired t-tests, while pre- and post-course self-efficacy surveys were analyzed with McNemar's test for paired data. Course evaluations included standardized questions about instruction quality, teaching relevance, knowledge gained, and post-course confidence in skills. Handwritten feedback was deidentified, collected in Ukrainian, and translated into English for analysis.
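As an illustration of the paired analyses described above, the following Python sketch shows how matched pre-/post-course knowledge scores could be compared with a paired t-test and how paired binary self-efficacy responses could be analyzed with McNemar's test. The study itself performed its analysis in R; this is a minimal, hedged equivalent with made-up example data, not the project's actual analysis code.

```python
# Minimal sketch of the paired analyses described above (illustrative data only;
# the study's actual analysis was performed in R).
import numpy as np
from scipy.stats import ttest_rel
from statsmodels.stats.contingency_tables import mcnemar

# Matched pre-/post-course knowledge scores (one row per participant).
pre = np.array([14, 16, 12, 18, 15, 13, 17, 11])
post = np.array([19, 21, 17, 22, 20, 18, 23, 16])
t_stat, p_value = ttest_rel(post, pre)
print(f"Paired t-test: t = {t_stat:.2f}, p = {p_value:.4f}")

# Paired yes/no self-efficacy answers summarized in a 2x2 table:
# rows = pre-course (no, yes), columns = post-course (no, yes).
table = np.array([[5, 40],   # "no" before: 5 stayed "no", 40 switched to "yes"
                  [2, 53]])  # "yes" before: 2 switched to "no", 53 stayed "yes"
result = mcnemar(table, exact=True)
print(f"McNemar's test: statistic = {result.statistic}, p = {result.pvalue:.4f}")
```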
PTF courses ran from November 2022 to December 2023. A total of 30 PTF courses were taught in the following 11 cities across 8 oblasts over the total implementation period: Kyiv ( n = 3), Fastiv ( n = 1), Dnipro ( n = 5), Kharkiv ( n = 3), Chernihiv ( n = 2), Mykolaiv ( n = 4), Vinnytsia ( n = 3), Lviv ( n = 1), Stryi ( n = 1), Izmail ( n = 2), and Odesa ( n = 5). Overall, a total of 17 unique international instructors were deployed to teach PTF in Ukraine. All instructors underwent a pre-deployment orientation covering the PTF course, logistics, and safety/security briefing. A total of 446 Ukrainian participants were trained in PTF by international and Ukrainian instructors (85 trained by Ukrainian instructors and 361 by international instructors), and 63 Ukrainian participants completed the ToT course. Demographics of the PTF participants can be found in . A 25-question pre- and post-test knowledge assessment was developed to align with the overarching PTF course objectives. Participant-matched pre- and post-tests demonstrated a significant improvement in knowledge . Participant-matched 21-question pre- and post-course self-confidence and self-efficacy surveys were completed by PTF participants. Variance between the total number of participants and the number of participants with matched test results was due to multiple reasons, including course incompletion and data entry errors that prevented matching. Results demonstrated a significant increase in all self-confidence and self-efficacy questions for participants trained by international and Ukrainian instructors. demonstrates the aggregate results of all PTF participants. provides disaggregated tables for each PTF time phase.
A six to eight-week follow-up evaluation was sent to course participants via their preferred messaging platforms to assess post-course skills utilization and stewardship. Evaluations were sent to all course participants, and 91/446 (20.4%) of PTF participants responded. Results of the responses can be found in . Over 73% of PTF participants reported teaching information learned in the course, including trauma management knowledge and/or procedural skills, to others. When asked whether any additional training topics should be taught in the future, over 75% of respondents requested further educational opportunities in pediatric non-trauma emergency care. Participants filled out immediate post-course evaluations for the international instructor-led PTF courses ( n = 376, 99.5% response rate; ) and the Ukrainian instructor-led PTF courses ( n = 122, 93.1% response rate; ). While both cohorts received overwhelmingly positive feedback, Ukrainian instructors received higher raw scores across all evaluation points than the international instructors. During war and conflict, a significant shift in medical care is required to prioritize trauma and acute care injuries . This transition involves upskilling or task-shifting health care providers to handle the surge of traumatic injuries caused by unique wartime mechanisms of injury and for special populations, including pediatrics . Ukraine's network of pediatric hospitals and academic institutional partners across the country provided the basic infrastructure and setting to undertake a large-scale, country-wide pediatric trauma educational initiative tailored to this population . Given the special focus on pediatrics, our unique Pediatric Trauma Fundamentals course provided pediatric-focused education for providers tasked with caring for children in an active war setting in Ukraine. This educational course filled a gap in pediatric trauma education, as other established courses may only briefly address pediatric trauma in overarching curricula focused on adult emergency and trauma care, or focus on providing pediatric trauma education outside of the acute care setting. Furthermore, this course sought to provide a knowledge foundation both for providers with pediatric expertise but no trauma experience, and for providers with significant trauma experience but minimal pediatric exposure. During the PTF conceptualization and development, it was clear that the course should be applicable and tailored to medical providers with specific training and expertise in pediatrics, as well as to emergency practitioners, general practitioners, and surgeons who may encounter pediatric trauma patients. To accommodate this spectrum of potential learners, the course covered pediatric differences in anatomy, physiology, and pathophysiology, and common presentations across the pediatric spectrum of injury. Over the course of the educational intervention, 509 medical providers were trained in PTF across 11 cities in Ukraine, including a cohort of 63 participants in the PTF ToT courses. These participants immediately began to implement independent PTF courses across several regions of Ukraine. The pre-/post-knowledge assessments and self-efficacy surveys demonstrated competency and confidence in participants' knowledge and their willingness to utilize the skills and knowledge gained during the course.
The consistent pre-/post-test knowledge improvement and overwhelmingly positive course feedback for both international and Ukrainian instructors demonstrated a degree of uniformity in instructor training and knowledge delivery to participants after the transition to fully Ukrainian-led instruction. Importantly, evaluation data showed higher raw evaluation scores for Ukrainian instructors than for international instructors. This finding suggests the success and importance of the transition to locally taught courses. It also points to factors that may make course delivery more effective, including instruction in the learners' native language without interpretation, educational delivery style, and other cultural considerations. These findings should be referenced when considering future iterations of PTF in other contexts. Additionally, it is important to note that education is not valuable unless it reaches patient care. In the six to eight-week follow-up surveys, learners reported having already taught information or skills to other medical providers and having used those skills themselves, indicating that this program is reaching the target population. There were several limitations to this study. The six to eight-week feedback response rate was approximately 20%, likely biased toward highly engaged participants. This provides a limited understanding of how course participants continue to utilize the knowledge gained from PTF. Given the breadth and length of the intervention, 17 international instructors were required, including instructors with backgrounds in pediatric emergency medicine, pediatric and general surgery, and adult emergency medicine. To mitigate variations in teaching content and quality, a detailed, point-by-point international instructor manual was provided to all instructors and discussed in depth during the pre-departure orientation. For those instructors with significant trauma experience but limited pediatric experience, a pediatric-specific pre-departure orientation was provided in addition to the required pre-departure orientation. Additional challenges to the standardization of classes included the risk of active conflict affecting course delivery. This reality included interruption of classes by air raid sirens, courses held in bomb shelters, basements, or parking garages during periods of high threat, and the unmeasured psychosocial stressors that are ever present in a wartime society. Risks to personnel were mitigated through safety and security protocols and risk assessments by our partner organization, International Medical Corps. Despite these disruptive forces, monitoring and evaluation of the courses consistently demonstrated improvement in knowledge and skills and uniformity of classes over the course of the longitudinal intervention. The PTF educational initiative demonstrates a successful three-phase model for implementing an educational initiative for providers caring for children in active conflict zones. Despite the safety and security challenges, this model also demonstrates the value of an academic/non-governmental organization partnership in mitigating risk through safety and security preparation, planning, and real-time risk mitigation in an active conflict zone such as Ukraine. Ukrainian instructors provided course experiences similar or superior to those of international instructors, likely due to multiple factors related to language, culture, and context.
Finally, building partnerships between academic institutions is a proven and promising model for sustainability and localization of long-term training programs. |
Integrative Multi-Omics Approaches for Identifying and Characterizing Biological Elements in Crop Traits: Current Progress and Future Prospects

The rapid rise in global population and the escalating unpredictability of climate patterns have intensified the urgency of improving crop productivity and quality, necessitating more robust and efficient strategies in agricultural science . One of the primary challenges faced by researchers and breeders of staple crops including rice, wheat, maize, and sugarcane is how to improve yield, quality, and survival rates despite the constantly changing conditions brought about by both biotic and abiotic environmental stresses. With the rapid advancements in sequencing and marker technologies, along with the widespread adoption of genome-based breeding methods , substantial investments have been made in multi-omics studies on crops. These efforts are bolstered by sophisticated algorithms and powerful computational resources, revolutionizing crop breeding from traditional phenotype-based selection to genomics-assisted breeding and genetic engineering . Advances in next-generation sequencing, biomolecular detection technologies, and bioinformatics have catalyzed significant progress across genomics, resequencing, functional genomics, epigenomics, transcriptomics, proteomics, metabolomics, ionomics, and microbiomics, transforming our approach to crop improvement . These omics approaches have become integral to crop improvement efforts, enabling more practical and precise elucidation of underlying genetic mechanisms and their influence on trait development . This transformative research encompasses a broad spectrum of topics, ranging from fundamental plant physiological processes to specific experimental goals aimed at identifying the most sensitive and altered molecular components under varying conditions. The integration of these methods has markedly advanced all stages of the breeding process, from the discovery of new genetic variations to more comprehensive and detailed phenotypic analyses, and the elucidation of important biological elements (including critical genes, transcription factors, and regulatory proteins) related to growth, disease resistance, stress response, and metabolic traits. Extensive multi-omics research has provided valuable insights into intriguing phenotypes and their adaptability to diverse environments. This knowledge is crucial for improving crop varieties by endowing them with adaptive traits. Recent years have witnessed a substantial reduction in the cost of generating multi-omics datasets, enabling the development of extensive interconnected datasets that provide a holistic view of crop biology. These datasets capture the features and impacts of genes, proteins, metabolites, and other components through numerous replicated samples under different experimental conditions. The comprehensive nature of these datasets enables a deeper understanding of critical biological elements and intricate molecular networks, facilitating the development of crops that are more resilient and productive in the face of global challenges. In this review, we summarize the current developments in various omics technologies and their recent advances in the agronomic discovery of biological elements associated with important agronomic traits, and discuss the challenges currently encountered and prospects for the future.
Multi-omics technologies, encompassing genomics, epigenomics, transcriptomics, metabolomics, ionomics, microbiomics, and beyond, provide an expansive and detailed perspective on the multifaceted traits of organisms. In the realm of crop research, these technologies are indispensable for elucidating the genetic underpinnings, environmental adaptability, and developmental processes of crops. By integrating data across various biological layers, multi-omics approaches enable a holistic understanding of crop biology, which is crucial for advancing agricultural science and improving crop performance . To illustrate the integration of multi-omics approaches, we use rice as an example, highlighting how these technologies can be applied to enhance crop research. First, the assembly and analysis of gap-free reference genome sequences for the elite rice varieties Zhenshan 97 and Minghui 63 have provided a model for studying heterosis and yield in rice . The availability of complete genome sequences significantly improves the quality of genome annotation, enabling researchers to better understand the complexity of genome structures, such as repetitive sequences, structural variations, and chromosomal rearrangements. With the rapid advancement of high-quality pan-genomes, researchers can more accurately identify and analyze large structural variations (SVs) and small single nucleotide polymorphisms (SNPs) that have significant phenotypic effects . By conducting genome-wide association studies of structural variations (SV-GWAS) and single nucleotide polymorphisms (SNP-GWAS), it becomes possible to reveal associations between these variations and complex traits. However, genomics alone may not provide a comprehensive understanding of how genetic variations manifest in biological functions. Therefore, integrating other omics techniques, such as transcriptomics, metabolomics, and epigenomics, can offer a more holistic view of the biological impact of genetic variations, thereby overcoming the limitations of genomics research. For instance, combining transcriptomic and translatomic data from Zhenshan 97 and Minghui 63 enabled the identification of key genes with allele-specific translation efficiency . These genes can be targeted in molecular breeding to enhance the performance of hybrid rice. Another study utilized ATAC-seq to map chromatin accessibility in six tissues of Zhenshan 97, while mGWAS identified regulatory loci for numerous metabolites, linking genotypes with phenotypes and deepening our understanding of gene regulation, thereby supporting trait improvement through breeding . Further research on Zhenshan 97 and Minghui 63 identified lncRNAs and circRNAs acting as competing endogenous RNAs with the potential to regulate gene expression. For example, osa-miR156l-5p (related to yield) and osa-miR444a-3p (related to nitrogen/phosphorus metabolism) were confirmed through dual-luciferase reporter assays to play crucial roles in rice growth and development, laying the foundation for future rice breeding analyses . Although microbiomics alone can characterize microbial communities, integrating it with metabolomics offers a more comprehensive perspective on plant–microbe–environment interactions. This holistic approach enables the development of more effective stress-resistance breeding strategies. In studies of rice pathogens, GWAS identified a novel gene, OsTPS1, which significantly enhances rice resistance to bacterial leaf blight .
Finally, combining spatial metabolomics with transcriptomics or proteomics allows researchers to explore the dynamic distribution of metabolites and proteins within plant tissues. This spatiotemporal resolution is essential for understanding the complex regulatory networks that govern plant development and stress responses . In summary, while single-omics approaches offer valuable insights, they often fall short of capturing the complexity of biological systems. Multi-omics integration addresses these limitations by providing a more comprehensive understanding of gene function, regulatory mechanisms, and phenotype expression, ultimately enhancing the precision and effectiveness of genetic breeding strategies.

2.1. Elucidating Gene Function and Genetic Variation in Crops

Genomics is the scientific study of the structure and function of the genome of an organism, with a focus on structure, function, evolution, mapping, the epigenome, mutant genes, and genome editing . With the advent of first-generation sequencing technologies and the development of next-generation sequencing technologies, researchers can now rapidly obtain the complete genome sequences of crops . Genome assembly reconstructs entire genome sequences from short sequencing reads, while genome annotation predicts functional regions within the genome, such as protein-coding genes, regulatory elements, and non-coding RNAs . Recent advances in long-read sequencing technologies have further enhanced the accuracy of these processes, particularly in resolving complex genomic regions . Genomics plays an indispensable role in elucidating genome structure and function, including identifying the locations of key genes and genetic variations, enabling researchers to pinpoint genes associated with specific traits, and providing molecular targets for crop genetic improvement. With the advancement of third-generation sequencing technologies like PacBio single-molecule real-time (SMRT) sequencing and Oxford Nanopore Technologies (ONT) ultra-long sequencing, coupled with the reduction in sequencing costs, an increasing number of telomere-to-telomere genomes and pan-genomes have been published . As vast amounts of genomic data are generated, comparative genomic analysis has become exceedingly important. Numerous comparative genomic tools have emerged, aiding in our understanding of candidate genes influenced by variations . Quantitative trait loci (QTL) mapping and genome-wide association studies (GWAS) are vital for understanding crop traits, with linkage analysis for QTL mapping serving as the direct precursor to association studies. GWAS were first demonstrated as an effective method for identifying genes associated with human diseases . Over the subsequent decades, GWAS have evolved into a powerful and widely used tool for elucidating complex traits, a progression primarily driven by advancements in genomic technologies that enable comprehensive examination of genetic variations across entire genomes within diverse populations . Building on this foundation, GWAS have been instrumental in identifying numerous loci linked to key agronomic traits in various crops, such as grain yield in rice and drought resistance in maize . These findings underscore the significance of GWAS in crop improvement.
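To make the GWAS idea concrete, the sketch below runs the simplest possible single-marker scan: each SNP, coded 0/1/2 by minor-allele count, is regressed against a quantitative trait, yielding one p-value per marker. Real crop GWAS additionally model population structure and kinship (for example, via mixed linear models, as the methodologies discussed next do); the data here are simulated and the code is only an illustration of the core association test.

```python
# Minimal single-marker GWAS scan (simulated data; real analyses add
# population-structure and kinship corrections, e.g., mixed linear models).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_plants, n_snps = 200, 1000

# Genotypes coded 0/1/2 = copies of the minor allele.
genotypes = rng.integers(0, 3, size=(n_plants, n_snps))

# Simulated quantitative trait: SNP 42 has a true effect, the rest is noise.
phenotype = 0.8 * genotypes[:, 42] + rng.normal(0, 1, n_plants)

p_values = np.empty(n_snps)
for j in range(n_snps):
    # Simple linear regression of trait on allele dosage at one marker.
    result = stats.linregress(genotypes[:, j], phenotype)
    p_values[j] = result.pvalue

bonferroni = 0.05 / n_snps  # crude multiple-testing threshold
hits = np.where(p_values < bonferroni)[0]
print("Significant markers:", hits)  # should recover SNP 42
```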
Various sophisticated methodologies have been developed to enhance the statistical power and computational efficiency of GWAS, facilitating the detection of genomic variations linked to traditional agronomic phenotypes as well as biochemical and molecular traits . These associations have significantly advanced gene cloning efforts and enabled the application of marker-assisted selection and genetic engineering, thereby expediting the process of crop breeding and improvement . The rise of high-throughput whole-genome sequencing technologies has transformed these methods into essential tools for cloning and identifying QTL in crops . Where alleles identified by existing QTL mapping are available, QTL results have proven highly useful for GWAS, serving as a complementary tool for prioritizing candidate loci . Moreover, the integration of high-throughput sequencing with advanced genotyping platforms has markedly improved the resolution and accuracy of QTL mapping and GWAS, particularly in identifying rare variants associated with complex traits . QTL mapping associates complex phenotypes with molecular marker data , while GWAS identify associations between genomic variations and traits . Together, these methods provide powerful tools for understanding the genetic background and complex traits of crops.

2.2. Investigating Epigenetic Regulation and Its Influence on Gene Expression

Epigenetics involves heritable changes other than DNA sequence alterations and focuses on genome-level modifications, mainly histone modifications and DNA methylation, among others . Epigenomics studies the full set of these modifications in the cell's genetic material, revealing how factors associated with the growth of an organism, including environment and stress, affect gene expression and influence crop phenotypes . These epigenetic modifications can lead to changes in plant traits without altering the underlying genetic code. Epigenome maintenance is a continuous process that plays a key role in maintaining the stability of eukaryotic genomes by participating in biological mechanisms such as DNA repair . By studying epigenomics, researchers can better understand crop adaptation to environmental changes and the maintenance of genetic diversity. Histone modifications have been extensively studied using chromatin immunoprecipitation (ChIP) technology combined with DNA microarrays (ChIP-chip) , providing insights into chromatin dynamics and gene regulation in plant genomes. However, with the advent of CUT&Tag, these analyses can now be performed with less starting material and at higher resolution, offering enhanced precision in genome-wide epigenomic profiling, even from limited plant tissue samples .
Although whole-genome bisulfite sequencing (WGBS) has been available since 2008, it remains the most widely used method for studying 5-methylcytosine today . WGBS have been employed to sequence the methylomes of numerous crop species, including Oryza sativa , Zea mays , and Brassica napus . This approach has provided significant insights into the role of DNA methylation in crop biology. Chromatin accessibility, a key indicator of the openness of genomic regions to transcription factor binding, is crucial for understanding gene regulation, particularly under varying environmental conditions. Methods such as MNase-seq, DNase-seq, ATAC-seq, and FAIRE-seq are used to analyze the accessible chromatin landscape of cells . These methods are used to map genome-wide epigenetic profiles at single-base resolution by selectively isolating histone-bound or unbound DNA fragments and performing sequencing and reference genome comparisons . Through these advanced technological tools, researchers are able to gain a deeper understanding of the mechanism of epigenetics in the formation of important crop traits, thus providing a theoretical basis and technical support for crop genetic improvement and environmental adaptation. 2.3. Characterizing Gene Expression Profiles and Regulatory Networks Transcriptomics is a technology used to study the sum of all RNA transcripts of an organism, providing direct insight into real-time gene expression profiles. As sequencing technology has advanced, a range of powerful and precise techniques has emerged. Two mainly contemporary technologies are microarrays, which quantify a set of predetermined sequences , and RNA-Seq, which uses high-throughput sequencing to record all transcripts . Among these, RNA-Seq has emerged as a powerful and effective method for conducting large-scale transcriptome studies, especially in most non-model plants that lack high-quality reference genomes . Building on these methods, various integrative approaches have emerged. For instance, transcriptome-wide association studies (TWAS) have been utilized to analyze 275 young rice panicle transcriptomes, revealing thousands of genes associated with panicle traits . This approach sheds light on regulatory variations that influence panicle architecture and provides valuable insights into causal genes and gene regulatory networks in rice. Moreover, the development of a rice pan-transcriptome has facilitated the characterization of transcriptional regulatory landscapes in response to cold stress . This highlights the complexity of transcriptomic responses and underscores the importance of pan-transcriptomes in capturing the full spectrum of genetic diversity and regulatory mechanisms under stress conditions. Processing transcriptome data, particularly from RNA-Seq, demand significant computational resources due to the vast amount of parallel sequencing reads generated, necessitating advanced bioinformatics tools for accurate analysis . When studying a particular species, transcriptomic datasets are often used for gene co-expression analysis (e.g., weighted gene co-expression network analysis (WGCNA)) by methods such as Spearman correlation coefficient (SCC), Pearson correlation coefficient (PCC), and mutual rank (MR) . In order to identify unknown biosynthetic genes in a target pathway, key decoy genes are usually required . 
Successful co-expression analysis hinges on the accurate correlation between regulatory and functional genes, which is essential for elucidating gene networks involved in key biological processes. The reliability of transcriptome data can be validated through quantitative PCR (qPCR) . Functional validation is typically achieved through gene knockout or rescue experiments . These analytical steps ensure the accuracy and biological relevance of transcriptomic data, helping to identify genes of interest and reveal mechanisms of gene expression regulation.

2.4. Profiling Metabolites and Metabolic Pathways

Metabolomics is considered the phenotypic endpoint of omics studies and aims to capture the end result of information transfer from the genome through the transcriptome and proteome, via comprehensive qualitative and quantitative analyses of all small molecules in an organism . It provides a snapshot of the metabolic state of an organism, reflecting its biochemical activities and physiological status at a given time. Studies in this field cover the chemical processes of metabolites, small-molecule substrates, intermediates, and cellular metabolites, aiming to reveal changes in metabolic pathways that may affect specific traits . Metabolite determination methods in plant research are primarily categorized into targeted and non-targeted approaches. Targeted analysis focuses on quantifying specific metabolites by comparing them to known standards. This approach is highly sensitive and specific, making it suitable for hypothesis-driven studies where the metabolites of interest are predefined. In contrast, non-targeted analysis aims to discover and identify as many metabolites as possible without prior knowledge of their identity. This approach compares metabolites based on their relative intensities, providing a comprehensive overview of the metabolome and enabling the discovery of novel metabolites and pathways. Current metabolomics techniques rely on ultra-high-pressure liquid chromatography (UHPLC) combined with high-resolution mass spectrometry (HRMS) or on nuclear magnetic resonance (NMR) spectroscopy . These advanced techniques can provide detailed chemical information for the accurate analysis and characterization of thousands of compounds . They are further divided based on the detection methods used: gas chromatography–mass spectrometry (GC-MS) is primarily employed for detecting volatile compounds, while liquid chromatography–mass spectrometry (LC-MS) is used for compounds that are less volatile and exhibit poor thermal stability . For instance, integrating genomic and metabolomic analyses in rice, such as those identifying flavonoid accumulation and stress tolerance genes , has unveiled pathways and genetic regulators that can be leveraged to boost disease resistance, seed vigor, and nutritional content . These insights provide a valuable foundation for future research aimed at enhancing plant traits through metabolomic studies.

2.5. Analyzing Plant-Microbiome Interactions and Microbial Diversity

Plant microbiomics, also known as phytomicrobiomics, has emerged as a rapidly growing field of research in recent years, owing to the crucial role microorganisms play in plant health and productivity . The plant microbiome not only forms a basic foundation for plant growth but also plays a key role in enhancing plant resistance to environmental stresses and diseases.
It has been shown that plants form complex symbiotic relationships with a variety of microbial communities that play important roles in plant physiology and ecology . These microbial communities inhabit the interior (endophytes) or surface (epiphytes) of plant tissues . In particular, rhizosphere microbial communities, located at the interface between roots and soil, facilitate the uptake of mineral nutrients by plants and help defend against pathogen invasion. For instance, nitrogen and phosphorus uptake in legumes is largely facilitated by symbiotic relationships with arbuscular mycorrhizal fungi, which enhance nutrient acquisition from the soil . Recent advancements in high-throughput sequencing and metagenomic techniques have significantly propelled our understanding of plant microbiomes, particularly in uncovering microbial contributions to plant health and productivity under diverse environmental conditions . Through metagenomics, researchers have been able to delve into the composition, function, and dynamics of the plant microbiome under different environmental conditions. Such studies have not only elucidated the complex interaction mechanisms between plants and microbes but also provided new perspectives on agricultural practices. A key discovery in plant microbiome research is the role of specific microorganisms in bolstering plant resistance to biotic and abiotic stresses, paving the way for the development of microbial-based biofertilizers and biopesticides . In addition, by understanding how microbes affect plant growth and development, researchers can design more sustainable agricultural management strategies.
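Amplicon and metagenomic surveys such as those described above typically summarize community composition with diversity indices. The sketch below computes Shannon diversity and relative abundances from a toy taxa-count table; it assumes a counts matrix of samples by taxa, as produced by common amplicon pipelines, and is illustrative rather than tied to any specific study.

```python
# Shannon diversity from a samples-by-taxa count table
# (toy numbers; real tables come from amplicon/metagenomic pipelines).
import numpy as np

# Rows = samples (e.g., rhizosphere vs. bulk soil), columns = taxa counts.
counts = np.array([
    [120, 30, 5, 45, 0],   # rhizosphere sample
    [60, 60, 55, 50, 40],  # bulk soil sample
], dtype=float)

rel_abund = counts / counts.sum(axis=1, keepdims=True)

# Shannon index H = -sum(p * ln p), ignoring zero-abundance taxa.
p = np.where(rel_abund > 0, rel_abund, 1.0)  # ln(1) = 0 contributes nothing
shannon = -(rel_abund * np.log(p)).sum(axis=1)

for name, h in zip(["rhizosphere", "bulk soil"], shannon):
    print(f"{name}: H = {h:.2f}")
# The more even bulk-soil community yields the higher Shannon index.
```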
2.5. Analyzing Plant-Microbiome Interactions and Microbial Diversity
Plant microbiomics, also known as phytomicrobiomics, has rapidly emerged as a burgeoning field of research in recent years, owing to the crucial role microorganisms play in plant health and productivity . The plant microbiome not only forms the basic foundation for plant growth but also plays a key role in enhancing plant resistance to environmental stresses and diseases. Plants have been shown to form complex symbiotic relationships with a variety of microbial communities that play important roles in plant physiology and ecology . These microbial communities inhabit the interior (endophytes) or surface (epiphytes) of plant tissues . In particular, rhizosphere microbial communities, located at the interface between roots and soil, facilitate the uptake of mineral nutrients by plants and help defend against pathogen invasion. For instance, nitrogen and phosphorus uptake in legumes is largely facilitated by symbiotic relationships with arbuscular mycorrhizal fungi, which enhance nutrient acquisition from the soil .

Recent advancements in high-throughput sequencing and metagenomic techniques have significantly propelled our understanding of plant microbiomes, particularly in uncovering microbial contributions to plant health and productivity under diverse environmental conditions . Through metagenomics, researchers have been able to delve into the composition, function, and dynamics of the plant microbiome under different environmental conditions. Such studies have not only elucidated the complex interaction mechanisms between plants and microbes but also provided new perspectives on agricultural practices. A key discovery in plant microbiome research is the role of specific microorganisms in bolstering plant resistance to biotic and abiotic stresses, which provides a theoretical basis for developing microbial-based biofertilizers and biopesticides . In addition, by understanding how microbes affect plant growth and development, researchers can design more sustainable agricultural management strategies.
Multi-omics technologies are central to advancing modern crop research, employing diverse strategies to pinpoint biological elements crucial to trait development and adaptation . Key biological components are identified and characterized through a thorough exploration of crop traits, encompassing agronomic performance (yield, quality, flavor, texture, etc.), responses to abiotic stresses (temperature extremes, drought, flooding, salinity, high light intensity, heavy metal toxicity), and resilience against biotic threats (fungi, bacteria, viruses, parasites, insects, weeds). These integrated investigations are pivotal for transformative innovations in agriculture. Multiple multi-omics analytical methods are listed in . This integrated approach not only promotes the effective utilization of crop genetic resources but also accelerates the innovation and implementation of novel crop breeding strategies, providing an important driving force for global food security and sustainable agricultural development.

3.1. Exploration of Agronomic Traits
Grain yield is a primary concern for many researchers.
Multi-omics approaches have identified numerous QTLs related to yield in rice, wheat, and maize . In rice, comprehensive studies have identified critical QTLs influencing traits like grain length and weight, notably loci such as qGL11 associated with OsGH3.13 . In wheat, the integration of QTL mapping with WGCNA has uncovered candidate genes influencing plant height and spike length, while epigenomic approaches have clarified regulatory elements that modulate yield-related traits . In maize, QTLs like qKRN2 have been linked to ear row number, with priority genes identified through the integration of interlocking populations and transcriptomic data . In other crops, multi-omics approaches have also uncovered significant genetic insights. For example, transcriptomic data revealed that taller coconut varieties express more lignin biosynthesis genes, such as CCR and F5H, with GWAS confirming key SNPs in the promoter region of the GA20ox gene on chromosome 12 as regulators of height variation . In sea island cotton, bulked segregant analysis sequencing (BSA-seq), RNA-seq, and whole-genome resequencing analyses identified the qD07-NB locus on chromosome D07, linking a missense SNP in the candidate gene Gbar_D07G011870 to the nulliplex-branch trait . Thus, while QTL mapping identifies loci associated with crop yield, the approach yields numerous loci of varying precision; the integration of genomic, transcriptomic, and epigenomic approaches enables precise identification of the key genes affecting crop yield, thereby reducing both breeding time and costs.

Oil content is a critical trait in oilseed crops like oilseed rape, peanuts, and soybeans. Researchers have utilized various technologies to advance understanding in this area. For instance, a high-density genetic map of peanuts was constructed using simplified genome sequencing of 120 samples, identifying 27 QTLs associated with kernel weight and size . Integrated approaches, including QTL-seq, QTL mapping, and RNA-seq, subsequently pinpointed major QTLs related to peanut seed weight . In Brassica napus, GWAS, TWAS, genomic selection, and gene module analysis on 505 inbred lines identified QTLs, genes, and regulatory networks associated with seed oil content (SOC) . Additionally, integrative analyses of transcriptomics, proteomics, and metabolomics identified spermidine synthase in soybean seeds, offering insights for enhancing seed oil content through molecular breeding strategies .

Besides yield, the flavor, texture, color, and nutritional content of a crop are key traits that contribute to its excellence. In tomatoes, studies utilizing advanced algorithms and integrating transcriptomic, lipidomic, and metabolomic data have clarified regulatory mechanisms, such as AtMYB12’s involvement in flavonoid synthesis and SlERF.H6’s role in reducing bitterness . Similarly, integrative approaches in coconut , citrus , passion fruit , kale , cashew , and green pepper have identified pivotal genes and regulatory networks influencing traits such as lipid synthesis, flavonoid production, aroma compounds, and nutrient metabolism. These studies underscore the powerful potential of multi-omics technologies in crop physiology and quality control, revealing complex metabolic regulatory networks and gene expression mechanisms, which provide valuable scientific insights and methodological tools for improving crop quality.
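Statistically, each locus test behind results like these reduces to regressing the phenotype on allele dosage, one variant at a time; real pipelines add covariates and corrections for population structure and kinship. A minimal sketch on simulated genotypes and a simulated yield-like trait (all values invented):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_plants, n_snps = 500, 1000

# Simulated genotypes coded as allele dosage 0/1/2; SNP 42 is given a true effect.
geno = rng.integers(0, 3, size=(n_plants, n_snps)).astype(float)
phenotype = 0.5 * geno[:, 42] + rng.normal(size=n_plants)  # yield-like trait

# One linear regression per SNP: test the slope (allele effect) for association.
pvals = np.array([stats.linregress(geno[:, j], phenotype).pvalue
                  for j in range(n_snps)])

bonferroni = 0.05 / n_snps  # simple multiple-testing threshold
hits = np.flatnonzero(pvals < bonferroni)
print("significant SNP indices:", hits)  # should recover SNP 42
```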
3.2. Understanding Adaptation to Various Environmental Conditions
Plants have evolved complex mechanisms to adapt to environmental stresses such as drought, extreme temperatures, high salinity, and heavy metal exposure. They detect stress signals through receptors and initiate signaling pathways. These pathways activate stress response genes through secondary messengers, resulting in the development of specific stress adaptation mechanisms at the transcriptional and translational levels.

In studies on plant drought and heat tolerance, researchers have focused on identifying the key physiological and molecular mechanisms that enable plants to withstand prolonged water scarcity and elevated temperatures. In rapeseed, integrating GWAS and RNA-seq analysis of 119 varieties revealed novel SNPs linked to the drought tolerance gene ABCG16 . Similarly, analyses of Nagina 22 rice highlighted the role of auxiliary carbohydrate metabolism and L-phenylalanine biosynthesis in drought tolerance . In tomatoes, a metabolome genome-wide association study (mGWAS) identified gene clusters regulated by SlMYB13, enhancing drought resistance through phenylpropanoid metabolism . Maize studies revealed ZmHB77 as crucial for drought tolerance by regulating root architecture . Additionally, comprehensive multi-omics approaches in maize have identified numerous QTLs and significant SVs associated with drought tolerance .

Recent advancements in plant cold tolerance research have significantly progressed through detailed studies on physiological responses and genetic mechanisms, aiming to enhance resilience in adverse climates. In rice, GWAS identified a QTL linked to seedling cold resistance, with the OsSEH1 gene playing a pivotal regulatory role . Comprehensive transcriptomic and metabolomic analyses demonstrated that OsSEH1 orchestrates gene expression and metabolite accumulation in the phenylpropanoid and flavonoid biosynthesis pathways. Additionally, the heightened sensitivity to exogenous abscisic acid (ABA) observed in the osseh1 mutant suggests that OsSEH1 regulates cold hardiness through ABA signaling pathways . In wheat, proteomic studies focusing on acetylation, complemented by multi-omics analyses, have identified the wheat cold stress-responsive protein TaPGK, underscoring its positive regulatory function in cold tolerance . In peanuts, BSA-seq has been employed to identify QTLs and genes associated with cold tolerance during the seedling emergence stage .

Understanding the strategies plants use to manage high salinity is crucial for developing salt-tolerant crops and enhancing agricultural sustainability. In Arabidopsis thaliana, mutants have demonstrated enhanced salt tolerance through the accumulation of stress-related metabolites . GWAS in wheat have identified loci associated with salt tolerance traits . Comprehensive metabolomics, proteomics, hormone, and ion analyses on date palms have highlighted mechanisms of salt avoidance and adaptation, including ion excretion and osmotic regulation . These findings provide essential molecular markers and genetic insights for crop breeding aimed at enhancing salt tolerance.

Nitrogen uptake remains pivotal for crop productivity. In rice, GWAS have linked OsGATA8 to nitrogen uptake efficiency, influencing tillering . Research in Brassica rapa utilized multi-omics approaches to identify interactions within agricultural ecosystems, emphasizing the role of soil organic nitrogen in crop yield .
In maize, extensive integration of multi-omics data has been employed to predict genes associated with nitrogen-use efficiency . These extensive studies unveil the complex genetic, molecular, and physiological adaptations that enable plants to survive environmental stresses. Through the integration of multi-omics approaches and cutting-edge genomic techniques, they advance our understanding of plant resilience and guide the development of robust crop varieties crucial for sustainable agriculture in the face of climate change.

3.3. Enhancing Resistance to Biological Stresses
Plants have developed complex defense mechanisms to counter biotic stresses such as fungi, bacteria, viruses, parasites, and insects. They detect stress signals via specific receptors, activating pathways that trigger stress response genes. These genes are then expressed to mount effective defenses against various biotic stresses.

Microbial interactions with plants are pervasive, driving extensive research aimed at deciphering these dynamics. In rice blast disease research, integrating multi-omics data with WGCNA and graph autoencoder techniques has revealed crucial Magnaporthe oryzae small RNAs, rice genes, mRNAs, and proteins involved in significant biological processes . In another study, populations of chromosome segment substitution lines (CSSLs) in rice were used to identify QTLs and structural variants associated with rice blast resistance, including the gene LOC_Os07g35680, which exhibits increased expression due to a 7.8 kb insertion in its wild allele . In studies of other rice pathogens, GWAS of over 200 rice populations identified a novel gene, OsTPS1, involved in synthesizing the sesquiterpene α-erythromycetin. OsTPS1 was shown to be epigenetically regulated by JMJ705 via the methyl jasmonate pathway, significantly enhancing rice resistance to bacterial leaf blight . Similarly, studies in cotton revealed the MYB transcription factor RVE2 as key to Verticillium dahliae resistance . Research in Fagopyrum tataricum highlighted genes linked to resistance against Rhizoctonia solani, emphasizing the role of cytochrome P450 in flavonoid accumulation .

Insects and pests pose significant challenges to crop cultivation. Early studies utilizing comparative proteomic and transcriptomic analyses identified 352 genes encoding secreted proteins in the salivary glands of planthoppers reared on the rice varieties TN1 and Mudgo, which interact with rice to influence its growth . Key proteins, including endo-β-1,4-glucanase (NlEG1) and NlSEF1, have been identified in the interaction between rice and the brown planthopper, underscoring their role in rice defense mechanisms . Shi et al. combined transcriptomics and metabolomics to compare Bph30 transgenic rice (BPH30T) with susceptible Nipponbare rice under brown planthopper (BPH) infestation, revealing that Bph30 likely enhances resistance by facilitating metabolite and hormone transport via the shikimic acid pathway . Additionally, the yellow stem borer (YSB) is a major threat to rice. Gokulan et al. used bulk-segregant analysis and next-generation sequencing to map a QTL interval for YSB resistance in the rice line SM92. Their transcriptome and metabolome analyses suggested a link between phenylpropanoid metabolism and YSB resistance, providing insights into plant defense mechanisms against this pest .
Recent advancements in multi-omics technologies have profoundly deepened our understanding of plant–pathogen interactions, uncovering crucial molecular mechanisms of host resistance and pathogen virulence. These integrated approaches offer promising strategies for enhancing plant health and promoting agricultural sustainability.
In recent years, the rapid advancement of multi-omics technologies has significantly transformed the landscape of biological research, with single-cell omics and spatial omics emerging as particularly groundbreaking fields. These advancements provide unprecedented opportunities to investigate cellular and molecular processes at an unparalleled resolution, enabling more precise dissection of biological mechanisms. Nevertheless, they also pose substantial challenges, including the requirement for advanced data integration techniques, substantial computational resources, and the innovation of novel experimental methodologies . Addressing these challenges will be pivotal for fully harnessing the potential of these technologies, thereby facilitating groundbreaking discoveries and pioneering applications across disciplines such as medicine, agriculture, and environmental science.

4.1. Advances in Single-Cell and Spatial Omics
The concept and technology of single-cell RNA sequencing (scRNA-seq) were first introduced in 2009 . This pioneering approach allowed researchers to explore gene expression at the single-cell level, revealing cellular heterogeneity within tissues that was previously undetectable with bulk RNA sequencing methods. Single-cell multimodal omics and spatially resolved transcriptomics technologies were subsequently named Method of the Year by Nature Methods in 2019 and 2020, respectively . Following this initial breakthrough, a series of innovative sequencing technologies combining single-cell approaches with other omics disciplines have emerged. These advanced methods, including single-cell multi-omics and spatial transcriptomics, represent cutting-edge techniques for dissecting the complex architecture of tissues, organs, and entire organisms. They focus on identifying distinct cell types, characterizing their specific functions, and understanding how cellular interactions shape biological systems . While these methods have greatly improved the resolution and accuracy of scRNA-seq, the core principle of studying individual cells to uncover cellular heterogeneity remains a cornerstone of this field. As cells are considered the structural units of life, understanding the differences between cell types and their developmental trajectories is crucial for gaining deeper insights into the fundamental processes of life.

One of the primary challenges with scRNA-seq is the requirement for tissue dissociation, which inevitably leads to the loss of spatial information. Spatial information is essential for understanding the interactions and regulatory processes between cells during development and their interactions with the environment. This has driven the development of spatial multi-omics, which aims to retain and utilize spatial context in conjunction with molecular data . Spatial transcriptomics (ST), for instance, maps the spatial distribution of cells within tissues and reveals local communication networks, underscoring the importance of integrating spatial and single-cell data to achieve a more comprehensive understanding of cellular dynamics and tissue organization .
4.2. Applications of Plant scRNA-Seq and ST
Single-cell RNA expression profiling has rapidly become an indispensable method in various research fields involving humans, animals, and plants. This technology allows for unprecedented accuracy and speed in identifying rare and novel cell types within tissues, offering significant advantages over traditional bulk RNA sequencing methods . Due to these characteristics, scRNA-seq has been effectively used in disease diagnosis, therapeutic strategy development, and the exploration of developmental biology . In plant research, the gene expression patterns governing cellular development often vary considerably across distinct developmental stages, emphasizing the importance of single-cell analysis for uncovering these temporal changes . Studying these processes at the single-cell level is thus critical for unraveling the intricate mechanisms that drive plant development and differentiation. Several research groups have utilized high-throughput scRNA-seq and ST to study Arabidopsis thaliana, the most widely used model plant in molecular genetics . Other model plants, such as rice , tomato , and maize , have also been the subjects of extensive single-cell and ST studies. The availability of various web-based graphical resources for plant scRNA-seq data has further facilitated the accessibility and usability of these data for researchers. For example, detailed graphical information on plant scRNA-seq data can be accessed online, providing valuable insights and tools for further research .

Despite significant progress in scRNA-seq, the inherent loss of spatial context continues to be a critical limitation. Early approaches, including in situ hybridization (ISH), single-molecule RNA fluorescence ISH, and laser capture microdissection (LCM), were employed to mitigate the spatial information loss inherent to scRNA-seq . However, these methods have not been widely adopted in plant research due to challenges such as the difficulty of plant cell wall hydrolysis and the diffusion of intracellular transcripts to the array surface. In 2017, Giacomello et al. made significant advancements by optimizing tissue fixation, staining, and permeabilization steps in ST technology . This led to the successful creation of the first spatial gene expression maps of whole transcriptomes in Arabidopsis thaliana inflorescence meristems, developing and dormant leaf buds of Populus tremula, and female cone buds of Abies fabri, demonstrating the feasibility of this technology in plant research. Following this, Ståhl et al. established a comprehensive spatial expression map of Arabidopsis, and Xia et al. characterized Arabidopsis leaves using single-cell Stereo-seq . Moreover, Stereo-seq has been used to construct spatiotemporal maps of other model organisms, such as Mus musculus, Drosophila, and Brachydanio rerio, highlighting its versatility and potential for broad applications .

4.3. Challenges and Future Prospects
The integration of multi-omics data presents significant challenges due to the heterogeneous nature of the data across different platforms. One of the key challenges is the alignment and normalization of data originating from diverse sources, each with its own inherent biases, noise, and scaling issues.
Additionally, the complexity of biological networks and the dynamic interplay between omics layers complicate the interpretation of integrated results, requiring advanced computational models that can handle large-scale, high-dimensional data while maintaining biological relevance. The early discoveries and applications of scRNA-seq were predominantly achieved in animal and human cells, leaving many challenges to be overcome in plant research . However, lessons learned from animal and human studies can help pave the way for advancements in plant research; analyzing how these technologies have been applied in animal experiments can provide valuable guidance for designing experiments in plants. Single-cell gene expression data frequently exhibit high levels of noise, leading to erroneous clustering in which cells of the same type may separate and cells of different types may cluster together due to batch effects . This considerable noise poses significant challenges for data analysis and interpretation. Despite extensive research, several challenges remain for computational data integration, necessitating the development of new and improved integration methods .
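To ground these clustering and integration concerns, the sketch below runs a standard scRNA-seq preprocessing-and-clustering pass with the widely used scanpy toolkit on a synthetic count matrix with invented batch labels; in a real analysis, a dedicated batch-integration step would typically be inserted before building the neighborhood graph.

```python
import numpy as np
import anndata as ad
import scanpy as sc

# Synthetic count matrix standing in for real scRNA-seq data: 300 cells x 2000 genes.
rng = np.random.default_rng(0)
counts = rng.poisson(1.0, size=(300, 2000)).astype(np.float32)
adata = ad.AnnData(counts)
adata.obs["batch"] = ["b1"] * 150 + ["b2"] * 150  # hypothetical batch labels

# Standard preprocessing: depth normalization, log transform, feature selection.
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)
sc.pp.highly_variable_genes(adata, n_top_genes=500, batch_key="batch")
adata = adata[:, adata.var.highly_variable].copy()

# Dimensionality reduction, neighborhood graph, and graph-based clustering.
sc.pp.pca(adata, n_comps=30)
sc.pp.neighbors(adata, n_neighbors=15)
sc.tl.leiden(adata, resolution=0.5)   # requires the leidenalg dependency
print(adata.obs["leiden"].value_counts())
```

On pure noise like this, any clusters found are artifacts, which is precisely the failure mode the text warns about when noise and batch effects are not handled explicitly.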
Furthermore, transcription is not the sole factor influencing plant development. Protein and metabolite levels also play crucial roles in regulating developmental processes. Therefore, a comprehensive approach that includes studying various cellular components is essential for accurately identifying the key factors involved in plant development. Spatial multi-omics technologies, which integrate metabolic and genetic information during development, enable the analysis of correlations between key metabolites and gene expression at single-cell resolution. Through spatial multi-omics, researchers can observe cell–cell interactions, gain deeper insights into cellular functions, identify rare cell populations, and characterize complex metabolites, thereby laying a strong foundation for advancing plant developmental biology .

scRNA-seq has proven to be one of the most transformative technologies in the life sciences, with applications spanning almost all areas of biological research. As the technology continues to evolve, significant progress is expected in several key areas, including increased throughput, reduced costs, and the incorporation of more modalities in a single assay. A promising future direction for scRNA-seq lies in its integration into routine clinical diagnostics and personalized medicine, where it could revolutionize disease classification and treatment strategies. Traditionally, the classification of cells and tissues has been based on their structure and function. To better understand the evolutionary and developmental relationships between tissues or cell types across species, conducting single-cell transcriptomic analyses across different species is essential .

Two primary challenges in single-cell epigenomics analysis are the dissociation of cells or nuclei, which results in the loss of tissue context information, and the inefficiency and incompleteness of current technologies. Current single-cell epigenomic approaches typically capture only a small fraction of the epigenome per cell and input population. Improving the efficiency and comprehensiveness of these methods is vital for profiling scarce or rare clinical samples, ensuring that meaningful data can be extracted even from limited material . Similarly, the two primary challenges in advancing single-cell proteomics are the efficient transfer of proteins from individual cells to the mass spectrometry (MS) detector and the enhancement of throughput without compromising coverage . Despite these challenges, innovations in single-cell technologies are advancing genomics, transcriptomics, epigenomics, and proteomics, providing more profound insights into cellular diversity and functionality across biological systems.

The advent of single-cell multi-omics is expected to revolutionize our understanding of cellular biology by enabling the simultaneous analysis of multiple omics data (genomics, transcriptomics, proteomics, metabolomics, etc.) from the same single cell. This comprehensive approach will provide deeper insights into how cellular-level variations influence ultimate phenotypic traits. Joint analysis of single-cell and other multi-omics data holds the promise of advancing our understanding of complex biological processes, paving the way for new discoveries and innovations in the field of life sciences.
The integration of multi-omics technologies has profoundly impacted crop research, yielding unparalleled insights into the genetic, molecular, and metabolic underpinnings of key agronomic traits and responses to environmental challenges. Genomics and transcriptomics have enabled precise identification of genes and pathways crucial for yield enhancement and stress tolerance, while proteomics and metabolomics have provided a deeper understanding of metabolic networks and defense responses against biotic and abiotic stresses. These advancements are accelerating breeding efforts and paving the way for more resilient and sustainable agricultural practices in the face of global challenges such as climate change. As we look ahead, overcoming challenges in data integration and computational analysis will be critical to fully harnessing the potential of multi-omics for predicting and manipulating complex traits in crops. By continuing to innovate and collaborate across disciplines, we can ensure a productive, resilient, and sustainable agricultural future, addressing food security needs and environmental sustainability on a global scale.
Experience with the implementation of central venous catheters by medical oncologists in a non-surgical setting
The escalating need to administer systemic treatments to cancer patients, alongside parenteral nutrition, has made convenient venous access imperative, especially with the emergence of immunotherapy reliant on immune checkpoint inhibitors; such access is needed not only for drug delivery but also for blood sample collection. Since the 1980s, techniques have been developed to facilitate venous access, with two primary catheter models being used in Spanish public hospitals: peripherally inserted central catheters (PICC) and subcutaneously tunneled catheters with a subcutaneous reservoir (ports). PICCs offer ease and rapidity of placement (by nursing staff through the basilic, brachial, or cephalic veins), albeit with the drawback of an increased rate of venous thromboembolic events, mandating prophylactic low-molecular-weight heparins (LMWH). Port models, owing to their subcutaneous placement, present advantages such as a reduced risk of local infections and the prevention of accidental removal. These benefits have significantly enhanced the quality of life of cancer patients, resulting in a growing demand for port placement in recent decades. Traditionally, the port implantation procedure has been carried out by specialized General and Vascular Surgery professionals, and more recently by interventional radiologists. The procedure for tunneled catheters required patient admission, a surgical setting, and fluoroscopic control under anesthesia. This entire process imposes constraints related to infrastructure and personnel, escalating its overall cost. Conversely, studies comparing venous dissection techniques versus percutaneous puncture have revealed no differences in complication rates. In light of these data, and given the rising demand for venous access coupled with advances in knowledge and procedure simplification, the Medical Oncology team at the Central University Hospital of Asturias (HUCA) has devised a streamlined methodology for outpatient port implantation. This approach incorporates aseptic measures and stringent patient safety protocols, eliminating the previously described surgical environment and complexity, considering that recommendations for the use of ultrasound for venous catheterization (II, C), as well as fluoroscopic control of the catheter tip position (II, B), are not standardized according to therapeutic guidelines. This study outlines our experience with this technique and its efficacy, comparing its safety and costs with the conventional surgical procedures documented in the literature and in our institution. An observational epidemiological study was conducted, analyzing the experience of oncology patients who underwent placement of a port system for systemic treatment or due to poor venous access, performed by the medical team of the Medical Oncology department at HUCA, utilizing the Seldinger technique and following ESMO clinical practice guidelines for central venous access. Prior to the procedure, an analytical study including coagulation tests and a complete blood count with normal values, as well as an updated chest radiographic study, was required. All patients signed an informed consent prior to the intervention.
The procedure was carried out in a specially equipped room in the medical oncology day hospital, within a non-surgical environment with aseptic measures, assisted by a member of the nursing assistant staff. Patients were administered 1 mg of sublingual lorazepam as premedication, as well as subcutaneous local anesthesia (2% mepivacaine) at the venous access area and reservoir placement site. The use of ultrasound guidance for vascular access was not mandatory. Peri-procedural antibiotic therapy was not administered, and fluoroscopy was not utilized. A post-procedure chest X-ray image was used to verify correct catheter tip placement and the absence of complications such as pneumothorax. Adult oncology patients with accessible medical records, who had a port system implanted between 2015 and 2019, with a minimum follow-up of 1 month, were included in the study. Cases with less than one month of follow-up or lacking clinical information for the evaluation variables were excluded. Demographic patient data (gender, age, tumor type), average duration of port systems, time to port complication, and time to removal due to complication were recorded. Median reservoir duration was defined as the time elapsed from the date of placement to removal or patient death. Ports not removed at the time of analysis were censored in the calculation of median catheter duration. Time to catheter complication was defined as the period from placement until an acute or late complication occurred; cases without a complication at the time of analysis were censored. Time to catheter removal due to complication was defined as the time elapsed between the placement date and removal due to a complication; cases without such an event at the time of analysis were censored. Immediate and late complications were documented, categorizing acute complications as those occurring within the first month after device placement and late complications as those arising from the second month onwards. Procedure-related complications included arrhythmias, pneumothorax, thrombosis, malposition of the catheter tip, infection, surgical wound dehiscence, pressure ulcers/cutaneous necrosis, obstruction, rupture/perforation, local pain, and syncope. Pneumothorax and thrombosis events required confirmatory imaging, with chest X-ray and Doppler ultrasound chosen, respectively. Infections were confirmed by bacterial growth in a culture of cutaneous exudate or a blood culture extracted through the central route. The remaining complications were diagnosed through clinical observation without the need for confirmatory tests. Data were analyzed with SPSS 13 software using descriptive statistics. P < 0.05 was taken as statistically significant. For univariate comparison between groups over the observation period, Kaplan–Meier curves were calculated, and the log-rank test was used to compare the curves. Cox regression analysis served as a univariate or multivariate model to quantify the independent contribution of one or more factors of interest to implant duration. A sample size of 500 patients was calculated to achieve an equivalent proportion of placements by each of the three specialists, with sufficient follow-up time, thereby avoiding operator-dependent biases. The total cost of device insertion (in euros) by medical oncologists was compared with that of devices implanted by vascular radiologists in our institution, taking into account the following items: port system cost; operating room cost per hour vs.
Medical Oncology day hospital technical room cost per hour; fluoroscopy control cost; chest X-ray cost; specialist physician cost per hour; nursing cost per hour; and assistant nurse cost per hour. All economic data were provided by the hospital management. This is an observational study analyzing routine clinical practice in the placement of CVCs by medical oncologists at our hospital over recent years. The study was approved by the Central University Hospital of Asturias (HUCA), which belongs to the Public Health System of the region of Asturias (SESPA). This study was conducted following the ESMO guidelines on catheter placement and in accordance with the Declaration of Helsinki. A total of 500 medical records of patients who underwent port placement between January 2015 and March 2019 were reviewed, with follow-up until 2022. All patients met the inclusion criteria, none met the exclusion criteria, and all placement attempts were successful. The characteristics of the reviewed cases are presented in Table . The median age was 62 years (range 18–81), with 286 males (57.2%) and 214 females (42.8%). At the time of the procedure, most cases had stage IV disease (91.8% vs. 8.2% with localized stage I–III disease) and digestive tumor pathology (79.4%). Right jugular vein access was the most frequently used route, in 345 instances (69%), followed by the right subclavian vein in 144 procedures (28.8%). The left venous route was used on seven occasions via the jugular vein (1.4%) and in four cases via the subclavian vein (0.8%). The safety analysis is shown in Table . A total of 49 complications were observed (9.8%), comprising 16 (3.2%) immediate complications and 33 (6.6%) late complications. Immediate complications included seven infections (1.4%), three catheter tip malpositions (0.6%), two pneumothorax incidents (0.4%), two pressure ulcers/cutaneous necrosis cases (0.4%), one episode of syncope (0.2%), and one thrombosis episode (0.2%). Late complications consisted of twelve infections (2.4%), seven cases each of thrombosis and pressure ulcers/cutaneous necrosis (1.4% each), three instances of local inflammation/pain without an infectious focus (0.6%), and two cases of catheter rupture and obstruction (0.2%). The median duration of the port systems was 470 days (95% confidence interval: 438–547). A total of 39 withdrawals were documented (7.8%): six (1.2%) resulting from immediate complications, twenty-six (5.2%) due to late complications, and seven (1.4%) at patient request/end of treatment. No statistically significant relationship was found between age, sex, tumor type, or stage and port duration. However, a trend toward association was observed with the side of placement, with left-sided placements (both jugular and subclavian) linked to a higher rate of withdrawals (p = 0.05). Furthermore, the occurrence of both immediate and late complications was associated with an increased risk of withdrawal (p < 0.0007). The median times from port placement to the occurrence of immediate and late complications were 8 days (range 0–26) and 209 days (range 32–938), respectively. The median times from port placement to withdrawal due to a complication or for other reasons were 182 days (range 0–938) and 204 days (range 0–1461), respectively. The average costs for medical oncology and vascular radiology were determined as shown in Table . According to this analysis, a difference in fixed costs of 994.38 euros was detected for each port placement in favor of the oncology day hospital.
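The survival-analysis workflow described in the statistical methods above (Kaplan–Meier estimation, log-rank comparison of curves, and Cox regression) can be illustrated with a short script. The following is a minimal sketch using the lifelines Python package in place of the SPSS software actually used in the study; the column names and example values are hypothetical, not the study data.

```python
# Hedged sketch of the survival analysis described in the Methods, using
# lifelines instead of SPSS 13; values and column names are illustrative only.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.DataFrame({
    "duration":  [470, 182, 204, 938, 30, 600, 120, 365],  # days in place
    "removed":   [0, 1, 1, 0, 1, 0, 1, 0],                 # 1 = removed, 0 = censored
    "left_side": [0, 1, 0, 1, 0, 0, 1, 0],                 # placement-side covariate
})

# Kaplan-Meier estimate of port "survival" (time to removal).
kmf = KaplanMeierFitter()
kmf.fit(durations=df["duration"], event_observed=df["removed"])
print("median port duration:", kmf.median_survival_time_)

# Log-rank test comparing left- vs. right-sided placements.
left = df[df["left_side"] == 1]
right = df[df["left_side"] == 0]
res = logrank_test(left["duration"], right["duration"],
                   event_observed_A=left["removed"],
                   event_observed_B=right["removed"])
print("log-rank p value:", res.p_value)

# Cox proportional-hazards model quantifying the covariate's contribution.
cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="removed")
print(cph.params_)
```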
The increasing demand for systemic treatments in oncology has created a need for permanent, safe, and convenient venous access for this purpose. Over the years, techniques have been developed that simplify placement, positively impacting efficiency and the quality of life of oncology patients. Implantation has transitioned from the surgical setting to an alternative performed by interventional radiologists. In line with the goal of further simplifying the procedure, the present study has demonstrated that it can be carried out by medical oncology specialists in a sterile technique room. The first objective of this observational study was to ensure effectiveness and a complication rate no higher than those reported in the literature for conventional surgical or interventional radiology settings. In this regard, published data in the literature refer to a retrospective study of 368 cases, with immediate complication rates of 2.5% and 1.1% in the interventional radiology and surgical settings, respectively. Other studies report broader percentages, ranging from 4 to 40% in general. On the other hand, a comparative study between surgical and radiological techniques for the implantation of venous ports in oncology patients described immediate complication rates of 9.2% versus 13.4% and delayed complication rates of 28% versus 27%, respectively. The 3.2% immediate and 6.6% late complication rates recorded in our experience can therefore be considered to meet the safety expectations of conventional procedures. There is disparity in the literature regarding the median duration of permanent CVCs, which is related to the follow-up time of the patients. Some studies reported a median duration of 354 days (range 3–1876) with a seven-year follow-up. Others reported up to 1401 days (range 1–2340). Considering our patients' follow-up time, our results fall within the expected range. There is limited information in the literature regarding costs or efficiency in the placement of venous ports in oncology patients. In a comparative study between placement in an operating room and in a non-surgical setting performed by surgeons without the need for fluoroscopic or ultrasound control, the non-surgical scenario showed similar complication rates and lower costs. Studies comparing the costs of placement in the operating room and the interventional radiology suite have also yielded inconsistent findings. LaRoy et al. as well as Feo et al. determined that costs were approximately two times greater for placements in the operating room compared with placement in the interventional radiology suite. However, Marcy et al. found that the costs of placement in the operating room were 15% lower, and Sticca et al. reported costs $749 lower per patient in favor of surgical placement. More recently, a study by Martin B. et al. in 2022, comparing placement by Interventional Radiology versus General Surgery, showed similar complication rates and lower costs for placement by interventional radiology. In this study, a cost difference of approximately $1500 was observed in favor of placement by interventional radiology.
Our study chose to evaluate the cost from the perspective of our hospital and in relation to the costs reported by the medical institution for facility use and the working hours of medical specialists, nursing staff, and nursing assistants, assuming a similar type of centrally purchased catheter and local anesthesia. According to this analysis, a difference in costs of 994.38 euros for each port placement in favor of the oncology day hospital confirms a solid cost-saving option. We have not found references in the literature to similar studies conducted by medical oncologists analyzing the safety and cost of the procedure performed by non-surgical specialists. The study has several limitations, such as its non-direct comparative nature, the basic method used to assess efficiency, and the fact that it was carried out by oncology personnel trained in the placement of CVCs, which is not a standardized requirement in the field of medical oncology. This limits the applicability of the model in routine clinical practice. In conclusion, our study shows that the placement of CVCs in a non-surgical setting by trained medical oncologists is a safe, reliable, long-lasting, and cost-saving option for long-term intravenous access in oncology patients compared with conventional procedures.
Performing otolaryngological outpatient consultation during the Covid-19 pandemic
Voice-over. (00.04) Since December 2019, Covid-19 rapidly became a pandemic. (00.11) The outbreak imposed drastic changes on daily clinical practice. If in the first phase of the epidemic a suspension of all deferrable consultations was inevitable, with a decrease in consultations as high as 60%, the time has come for a gradual resumption of daily activities, which are to be performed while avoiding nosocomial viral transmission and protecting healthcare providers from infection. (00.35) With this in mind, this video ( ) aims to provide a simple and concise example of outpatient ENT clinic management, focusing on how to perform a complete endoscopic ear, nose, and throat examination. (00.52) Prior to the visit, telephonic triage is undertaken to rule out potential carriers of SARS-CoV-2 infection by screening for signs and/or symptoms of Covid-19, such as fever, cough, rhino-conjunctivitis, difficulty in breathing, and loss of the sense of smell or taste, and for close contacts with a confirmed Covid-19 case within the last 14 days. (01.18) At the entrance to the healthcare facility, patients' body temperature is checked. (01.25) Only the patient is admitted into the visiting room, while attendants have to wait outside, except in the case of children and dependent patients. (01.33) The visiting room is organized into two separate areas: the visiting area and a separate clean desk area, where the assistant writes the report and handles the patient's personal documentation. (01.45) Upon entering the room, the patient sits down directly on the visiting chair, where the medical interview takes place before the clinical examination is conducted. (01.55) The instrumentation required for the consultation is prepared in advance, to avoid searching for tools later on and risking unnecessary surface contamination. (02.07) The required PPE is prepared in advance as well, and the examiner is already dressed to level II protection standards, as required for endoscopic examination. (02.18) Otolaryngology consultations are considered at particularly high risk owing to close contact with patients' secretions. Endoscopic examination performed with a dedicated monitor, avoiding use of the eyepiece, allows the examiner to maintain an adequate distance from the patient during the whole consultation. (02.38) The patient is comfortably seated, with the surgical mask covering the nose and the mouth, and slightly turns the head to ease ear inspection. Only if more accurate inspection of the ear canal or operative maneuvers are needed is otoscopy performed using the microscope. (02.57) The patient is asked to lower the protective mask so that it covers only the mouth. If needed, topical nasal decongestion and anesthesia are performed using cottonoids rather than local anesthetic sprays. (03.11) Standing at the right side of the patient, nasal endoscopy is performed, allowing a thorough inspection of the nasal fossa and the nasopharynx to evaluate their conformation and the presence of nasal masses or pathological secretions. (03.28) The patient is then asked to remove the protective mask. The hard palate, cheeks, tongue, and vestibule are inspected, along with the palatine tonsils and the oropharynx. Soft palate and tongue motility are assessed as well.
(03.46) Evaluation of the hypopharynx and larynx can be performed by inserting a 70° angled scope into the oropharynx during tongue protrusion. In this phase, asking the patient to concentrate on breathing can help inhibit the gag reflex. (04.03) Alternatively, laryngeal examination can be performed with a flexible scope. In doing so, the examiner still stands in the same position at the right side of the patient. (04.16) After completing the physical examination, the patient puts on the protective mask, covering both the nose and the mouth. The examiner communicates the clinical findings, answers the patient's questions, and concludes the consultation. The second member of the medical staff writes the medical report and hands it to the patient, avoiding direct interpersonal contact. (04.39) After the consultation is over, the examiner removes all the PPE and washes his hands. High-level disinfection is performed on the equipment used during the examination, and all other surfaces in the room are wiped with a disposable cloth dampened in an alcohol- or sodium hypochlorite-based solution. (05.00) It is currently unknown how long it will take to restore pre-epidemic practice. Education about how to safely conduct an ENT consultation might contribute to reducing the nosocomial transmission of SARS-CoV-2 and other viral respiratory infections. The following is the supplementary data related to this article: Video 1, Performing otolaryngological outpatient consultation during the Covid-19 pandemic.
Circulating RNA Markers Associated with Adenoma–Carcinoma Sequence in Colorectal Cancer
Colorectal cancer is the third leading cause of cancer-related morbidity and mortality worldwide, with over 1.8 million new cases and approximately 881,000 deaths each year. A key challenge in the management of colorectal cancer is the lack of early symptoms, which leads to a delayed diagnosis. Consequently, approximately 20% of colorectal cancer cases are detected at an advanced stage, when metastatic disease is already present. Even among patients diagnosed at an early stage, more than 30% eventually develop metastatic disease, resulting in poor survival outcomes. Colorectal cancer typically arises from adenomatous polyps, benign growths in the colon or rectum that may progress to malignant carcinoma via the adenoma–carcinoma sequence (ACS). This model, first described by Vogelstein et al., outlines the key genetic mutations driving colorectal cancer progression, including early mutations in the APC gene, followed by KRAS mutations during adenoma progression, and TP53 mutations frequently observed in invasive carcinomas. The early detection and removal of adenomatous polyps via a colonoscopy are essential to prevent colorectal cancer progression. Despite advances in understanding colorectal cancer's molecular pathogenesis, clinical applications of this knowledge are limited by the need for invasive tissue sampling. Non-invasive methods, such as stool-based DNA tests, have been developed for colorectal cancer screening, but they have limited sensitivity, detecting only 43% of advanced adenomas and 14% of non-advanced adenomas. These limitations emphasize the need for more sensitive and non-invasive biomarkers that can detect colorectal cancer at earlier stages and track disease progression. Liquid biopsies have emerged as a promising alternative for cancer detection, enabling the non-invasive monitoring of tumor-associated biomarkers, including circulating tumor DNA (ctDNA), circulating tumor cells (CTCs), and circulating RNA transcripts. While ctDNA primarily reflects genetic mutations and CTCs provide insights into metastatic potential, circulating RNA transcripts offer a more dynamic perspective by capturing real-time transcriptional activity within tumor-associated cells. Unlike extracellular miRNAs, which are passively released and often reflect systemic changes rather than tumor-specific processes, intracellular circulating RNA transcripts provide a functional snapshot of active gene expression, particularly in immune responses and tumor–microenvironment interactions. This highlights their potential as sensitive biomarkers for detecting the adenoma–carcinoma sequence (ACS) and monitoring early colorectal cancer (CRC) progression. In recent years, circulating RNA transcripts have shown potential as diagnostic and prognostic tools for several cancers, including colorectal cancer. RNA molecules, often protected within extracellular vesicles or associated with proteins, can provide valuable insights into tumor activity, and advances have been made in other cancer types, such as breast cancer, where key RNA markers, including EPCAM, KRT19, and ERBB2, have been identified. However, the full potential of circulating RNA in colorectal cancer remains underexplored.
This study aimed to address this gap by investigating the utility of circulating RNA transcripts in detecting colorectal cancer at different stages of progression, from benign adenomas to malignant carcinomas. By focusing on the ACS, which characterizes colorectal cancer development, this study aimed to identify novel circulating RNA biomarkers that could improve early detection; provide real-time insights into tumor biology; and contribute to the development of more sensitive, non-invasive screening tools. High-throughput RNA sequencing was employed to profile circulating RNA from healthy controls (HCs); symptomatic non-disease controls (NDCs); and patients with non-advanced adenomas, advanced adenomas, and colorectal cancer ( ), with the goal of identifying differentially expressed genes (DEGs) that can enhance diagnostic accuracy and prognostication.
2.1. The Selection of Circulating Transcripts Associated with the ACS via RNA Sequencing
To identify novel circulating transcripts associated with colorectal cancer progression, RNA sequencing was performed on 100 samples (20 from each group: HC, NDC, non-advanced adenoma, advanced adenoma, and colorectal cancer). Of the 46,427 initial genes, 14,844 were analyzed after excluding those with a count value of zero in at least one sample. DEGs between groups were identified using a |log2 fold change| of ≥2 and a p value of <0.05. A total of 187 significant DEGs were detected across the 10 comparison pairs (e.g., NDC vs. HC, advanced adenoma vs. non-advanced adenoma, and colorectal cancer vs. advanced adenoma). These DEGs were further examined for their biological significance to the adenoma–carcinoma sequence ( ).
2.2. Circulating Transcripts in the Transition from HC to NDC
We identified 24 significant DEGs on comparison between the HC and NDC groups. The most prominent gene, IFI27, was also significantly upregulated. Functional annotation through a GO analysis showed that the identified genes were involved in RNA processing, specifically mRNA splicing, and in immune-related processes. These findings suggest early immune system activation even in the absence of a confirmed pathology, as observed in the NDC group ( ).
2.3. Transition from NDC to Non-Advanced Adenoma
Fourteen significant DEGs were identified in the non-advanced adenoma vs. NDC comparison. A GO analysis revealed that genes such as DEFA4, IGHG1, and IGLC2 are involved in immune responses, especially antigen-binding and neutrophil-mediated defense mechanisms. This immune activation indicated that during the early stages of adenoma formation, the immune system is actively engaged in surveillance against tumor progression ( ).
2.4. Transition from Non-Advanced Adenoma to Advanced Adenoma
Seventeen DEGs were observed during the transition from non-advanced adenoma to advanced adenoma, with genes such as FCGR1A and S100P showing reduced expression. These DEGs were associated with immune regulation and antibody-dependent cellular cytotoxicity. Interestingly, non-coding RNAs, such as LOC107984755 and RPL29P4, were upregulated, reflecting a shift towards altered cellular regulation as the adenomas became more advanced ( ).
2.5. Transition from Advanced Adenoma to Colorectal Cancer
A total of 86 DEGs were identified during the advanced adenoma-to-colorectal cancer transition. Notably, the upregulation of CD177, a marker of neutrophil activity, suggests a critical role of neutrophil-mediated responses in the transition from adenoma to carcinoma. Other key transcripts, such as MPO and DEFA3, further emphasize the role of neutrophil extracellular traps and their immune activity in tumor progression. The transition to malignancy is characterized by increased immune response activation and cellular damage repair mechanisms. The upregulation of immune-related genes such as MPO and DEFA3 highlights their potential as biomarkers for the detection of advanced colorectal cancer stages ( ).
2.6. Protein–Protein Interaction and Pathway Analyses
A STRING network analysis revealed strong interactions between key genes such as MPO, DEFA4, and CD177 across the ACS. These interactions suggest that immune processes, particularly neutrophil-mediated responses, are central to colorectal cancer progression.
A KEGG pathway analysis further supported this finding by identifying neutrophil extracellular trap formation and Fc gamma receptor-mediated phagocytosis as critical immune-related pathways in the adenoma–carcinoma transition ( , , , and ).
2.7. Clinical Validation of Candidate Biomarkers
An RT-qPCR analysis was performed to validate FCGR1A and MPO levels in 20 samples, each from the advanced adenoma, non-advanced adenoma, and HC groups. FCGR1A was significantly upregulated in the advanced adenoma group compared with the HC group, indicating its potential as an early biomarker of adenomas. Although upregulated in the non-advanced adenoma group, the FCGR1A difference was not statistically significant. MPO, however, was significantly upregulated in the non-advanced and advanced adenoma groups, confirming its role as a circulating biomarker throughout the ACS. These findings support the utility of MPO and FCGR1A as potential non-invasive biomarkers for colorectal cancer screening and monitoring, especially throughout the ACS ( ).
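As an aside, the DEG selection rule stated in Section 2.1 (excluding genes with a zero count in any sample, then thresholding on |log2 fold change| ≥ 2 and p < 0.05) is simple to express in code. The sketch below is illustrative only; the DataFrame column names are hypothetical and this is not the authors' actual analysis pipeline.

```python
# Illustrative sketch of the DEG selection criteria from Section 2.1;
# column names ('log2fc', 'pvalue') are hypothetical, not from the study.
import pandas as pd

def filter_expressed(counts: pd.DataFrame) -> pd.DataFrame:
    """Drop genes (rows) with a count of zero in at least one sample."""
    return counts.loc[(counts > 0).all(axis=1)]

def select_degs(stats: pd.DataFrame, lfc: float = 2.0, alpha: float = 0.05) -> pd.DataFrame:
    """Keep genes with |log2 fold change| >= lfc and p value < alpha."""
    mask = (stats["log2fc"].abs() >= lfc) & (stats["pvalue"] < alpha)
    return stats.loc[mask]
```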
This study provides a comprehensive analysis of circulating RNA transcripts associated with colorectal cancer progression, particularly within the ACS. A total of 187 DEGs were identified with significant enrichment in immune response pathways, specifically those involving neutrophil activity. Key transcripts, such as MPO, FCGR1A, DEFA4, and CD177, have been highlighted as potential biomarkers of colorectal cancer, underscoring the role of immune dysregulation in tumor development and progression. Our findings emphasize the immune system's pivotal role in CRC progression, particularly highlighting the involvement of neutrophils, which promote tumor development through mechanisms such as neutrophil extracellular traps (NETs). We found that DEFA4 is associated with non-advanced adenoma, MPO with advanced adenoma, and CD177 with CRC, suggesting their potential as biomarkers for distinct ACS stages. In the early stages of CRC, neutrophils, along with DEFA4, mediate an initial immune response to microbial and endogenous stimuli. As immune complexes, including cytokines, accumulate, FCGR1A, expressed on neutrophil membranes, interacts with these complexes, facilitating NETosis. During the advanced adenoma (AA) stage, NETosis leads to reactive oxygen species (ROS) production, triggering MPO release, which further amplifies cellular damage and promotes tumor progression. Our study substantiates previous findings that neutrophils act as a 'double-edged sword', providing beneficial immune responses but, under certain conditions, exacerbating tumorigenesis. By elucidating the stage-specific roles of circulating RNA markers in neutrophil-driven immune mechanisms, our findings advance the understanding of CRC pathophysiology and highlight novel avenues for early detection and therapeutic intervention. The upregulation of MPO and CD177, markers closely associated with neutrophil activity, supports the hypothesis that innate immune responses are a driving force in the transition from adenomas to carcinoma. Compared to prior studies focusing on tissue-based biomarkers or adaptive immune responses, our research, using circulating RNA from whole blood, provides a non-invasive alternative. The inclusion of immune-related transcripts, such as MPO and FCGR1A, in the early and advanced stages of colorectal cancer highlights the value of liquid biopsy for cancer detection. Advanced adenomas (AAs) are associated with a 2.7-fold higher incidence of CRC and a 2.6-fold increase in CRC-related mortality compared to normal or non-advanced adenomas. Therefore, early CRC detection at the advanced adenoma stage is critical for timely intervention and improved patient outcomes. This study highlights MPO and FCGR1A as particularly compelling biomarkers for both advanced adenoma detection and early CRC monitoring, addressing key clinical unmet needs in CRC diagnosis and prognosis. One of the significant challenges in clinical practice is the lack of clear guidelines on when to remove precancerous polyps and how to manage recurrent polyps, which places a considerable burden on clinicians. While the probability of a polyp progressing to malignancy is generally low, studies have reported an increased risk of metastasis triggered by immune responses following the surgical removal of recurrent polyps. However, there is a lack of definitive evidence and reliable monitoring markers to guide these medical decisions.
Our study provides novel insights into the role of neutrophils in CRC progression, particularly through MPO and FCGR1A, which may serve as critical markers in understanding immune-mediated tumor progression. A deeper characterization of neutrophil involvement, as revealed in this study, offers potential solutions to these unresolved clinical challenges and may contribute to refining CRC screening, surveillance, and treatment strategies. Although similar studies have explored the roles of ctDNA and CTCs, this study demonstrates the potential of RNA biomarkers, particularly those reflecting immune activity. However, our study diverges from previous research that primarily emphasizes adaptive immune responses, such as T-cell involvement, in colorectal cancer progression. In contrast, we found that innate immune responses, particularly those mediated by neutrophils, play a central role in the early stages of adenoma development. This difference could be attributed to the sample type (whole blood vs. tissue) and the broader representation of immune cell types, including neutrophils, in our analysis. Additionally, whole-blood RNA sequencing provides a more holistic view of circulating immune cells and offers unique insights into colorectal cancer biology. Despite these promising findings, this study has some limitations. The sample size for RNA sequencing was relatively small, and while the clinical validation of MPO and FCGR1A is encouraging, larger studies with more diverse patient populations are needed to confirm their diagnostic utility. Furthermore, our study focuses on circulating RNA transcripts, which, when combined with other liquid biopsy markers, such as ctDNA and CTCs, could potentially yield even more robust diagnostic tools. Future studies will be needed to validate these RNA biomarkers in larger multicenter cohorts and to explore their prognostic value over time. Additionally, integrating transcriptomic data with other circulating biomarkers could enhance diagnostic precision and provide insights into dynamic changes in tumor biology during colorectal cancer progression and treatment. Overall, this study contributes to the growing body of evidence showing that neutrophil activity plays a significant role in colorectal cancer progression. The identification of circulating biomarkers such as MPO and FCGR1A could contribute to the development of non-invasive screening tools for early colorectal cancer detection, offering potential clinical benefits through more personalized approaches to colorectal cancer management.
4.1. Study Participants
This study was approved by the Institutional Review Boards of Severance Hospital (approval no. 4-2017-0148), Gangnam Severance Hospital (approval no. 3-2017-0024), Gangbuk Samsung Hospital (approval no. 2017-02-022-009), and the Medical Checkup Center of Wonju Severance Christian Hospital (approval no. CR319115). A total of 160 blood samples were collected from individuals scheduled for a colonoscopy at these institutions between 2017 and 2023, with all blood samples collected prior to the colonoscopy procedure. The participants included adults aged 19 years or older who provided written informed consent and were either scheduled for a colonoscopy during routine health screenings or presented with gastrointestinal symptoms at a gastroenterology clinic. The exclusion criteria included a lack of consent, intellectual disabilities or severe psychiatric disorders, a history of malignancy or curative treatments within the previous 5 years, recent use of immunosuppressive drugs (within 6 months), and pregnancy. One hundred samples were used for next-generation sequencing and 60 samples for quantitative reverse transcription PCR (RT-qPCR). Blood samples were divided into five groups based on the colonoscopy and histological results, including dysplasia grade level, villous component proportion, and the size and number of polyps, according to the European Society of Gastrointestinal Endoscopy. The samples were classified into colorectal cancer, advanced adenoma, non-advanced adenoma, NDC, and HC groups. The samples used in this study were randomly selected from each group and are summarized in .
4.2. Blood Collection and RNA Isolation
Blood samples (3 mL) were collected via venipuncture using Tempus™ Blood RNA Tubes (Applied Biosystems, Chicago, IL, USA) to avoid epithelial cell contamination. The tubes were vortexed for 10 s to ensure mixing with the stabilizing reagent. Blood samples were stored at either 4 °C for up to seven days or frozen at −20 °C until used for RNA isolation. The total RNA was extracted using the Tempus™ Spin RNA Isolation Kit (Applied Biosystems), following the manufacturer's protocol. RNA quality was assessed using the Agilent 2200 TapeStation System (Agilent Technologies, Santa Clara, CA, USA). Samples with an RNA Integrity Number (RIN) greater than 7.0 were selected for further analysis.
4.3. cDNA Synthesis
Complementary DNA (cDNA) was synthesized using M-MLV reverse transcriptase (Invitrogen, Carlsbad, CA, USA); random hexamers (Invitrogen); and a dNTP mixture (Intron Biotechnology, Seongnam, Republic of Korea), according to the manufacturer's instructions.
4.4. RNA Sequencing and Differential Gene Expression Analysis
RNA sequencing was conducted at Macrogen (Seoul, Republic of Korea). The RNA concentration was quantified using the Quant-iT™ RiboGreen RNA Assay Kit (Invitrogen, Carlsbad, CA, USA), and RNA integrity was confirmed using the Agilent 2200 TapeStation System (Agilent Technologies). Samples with a RIN of >7.0 were selected for library construction. RNA libraries were prepared using the TruSeq® Stranded Total RNA with Ribo-Zero Globin Kit (Illumina, San Diego, CA, USA) with poly(A) selection and fragmentation at 94 °C for 8 min, targeting an insert size of approximately 300 bp. Sequencing was performed at a depth of 30 million reads per sample to ensure sufficient coverage for the differential expression analysis.
4.5. Gene-Enrichment and Functional Annotation Analysis
A gene-enrichment analysis was performed using g:Profiler [RRID:SCR_006809] (version e110_eg57_p18_4b54a898) to evaluate gene ontology (GO) terms and biological pathways. DEGs were selected using an adjusted p value of <0.05 and a log2 fold change (FC) of >2.
4.6. Clustering Heatmap Analysis
Clustering heatmaps for DEGs were generated using MultiExperiment Viewer (MeV version 4.9.0), a Java-based desktop application for gene expression analysis and visualization.
4.7. Protein–Protein Interaction Network Analysis
Protein–protein interaction networks were constructed using STRING [RRID:SCR_005223] (version 12.0), with an interaction score of >0.4 for medium confidence. Network topology parameters were calculated and visualized to explore the interaction clusters.
4.8. GO Analysis I
A GO analysis of DEGs was conducted using the ClueGO [RRID:SCR_005748] plugin (version 2.5.10) and the CluePedia plugin (version 1.5.10) in Cytoscape [RRID:SCR_003032] (version 3.10.0). Functional correlations were explored using hypergeometric testing, and significant pathways were selected for further investigation.
4.9. GO Analysis II
Gene Set Enrichment Analysis (GSEA) [RRID:SCR_003199] was used to assess concordant differences in predefined gene sets between the biological states. The enrichment scores (ESs) and p values were calculated based on the expression profiles.
4.10. Gene Pathway Analysis
Gene pathways were analyzed using the Kyoto Encyclopedia of Genes and Genomes (KEGG) [RRID:SCR_012773] database to gain insights into the biological functions and pathways involved in the sample groups.
4.11. Quantitative PCR Assay
Quantitative PCR (qPCR) was performed to quantify gene expression using StepOnePlus™ Real-Time PCR System software v2.0.2 (Applied Biosystems). The quantification cycle (Cq) method was used, with GAPDH as the reference gene for normalization. Thermal cycling conditions were set at 95 °C for 10 min, followed by 40 cycles at 95 °C for 15 s and 60 °C for 1 min. Relative gene expression levels were calculated using the 2^−ΔCq method. All qPCR reactions included non-template controls and were performed in triplicate for each sample to ensure reproducibility.
4.12. Statistical Analysis
All statistical analyses were performed using GraphPad Prism [RRID:SCR_002798] (version 9.0). Differences between groups were assessed using Student's t-test. Statistical significance was set at p < 0.05.
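The relative-quantification rule in Section 4.11 reduces to a one-line formula. Below is a worked sketch of the 2^−ΔCq calculation with GAPDH as the reference gene; the example Cq values are invented for illustration, and triplicate averaging is assumed to have been done upstream.

```python
# Worked sketch of the 2^-dCq method from Section 4.11; example Cq values
# are invented and do not come from the study.
import numpy as np
import pandas as pd

def relative_expression(cq_target: pd.Series, cq_reference: pd.Series) -> pd.Series:
    """Return 2^-(Cq_target - Cq_reference) per sample."""
    return np.power(2.0, -(cq_target - cq_reference))

cq_mpo = pd.Series([27.1, 26.4])    # hypothetical mean Cq values for MPO
cq_gapdh = pd.Series([19.6, 19.8])  # hypothetical mean Cq values for GAPDH
print(relative_expression(cq_mpo, cq_gapdh))
# e.g., dCq = 27.1 - 19.6 = 7.5, so relative expression = 2^-7.5 ~ 0.0055
```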
This study identified key circulating RNA transcripts, including MPO, FCGR1A, DEFA4, and CD177, that play crucial roles in ACS progression. These results highlight the importance of immune responses, particularly neutrophil-mediated mechanisms, in colorectal cancer development. By focusing on circulating RNA biomarkers, this study offers a promising non-invasive approach for detecting and monitoring colorectal cancer. Clinical validation of MPO and FCGR1A underscores their potential as biomarkers for early adenoma detection and advanced colorectal cancer monitoring. Future research should validate these findings in larger cohorts and integrate them with other biomarkers to enhance colorectal cancer screening accuracy, potentially leading to earlier interventions and more personalized treatment strategies.
Fast and Simple Protocol for N-Glycome Analysis of Human Blood Plasma Proteome
Protein glycosylation is one of the most common post- and co-translational modifications and significantly enhances the diversity of proteins. The conformation of proteins, along with their properties and activity, is considerably influenced by the attachment of oligosaccharides. This affects receptor affinity, substrate specificity, and protein half-life. Glycosylation primarily occurs in the Golgi apparatus, with a minor presence in the endoplasmic reticulum. The unique characteristics of glycosylation enzymes enable the production of numerous oligosaccharide combinations, resulting in vast diversity within the human glycoproteome. The regulation of this diversity is influenced by genetic, epigenetic, and environmental factors. The glycosylation profile typically remains stable within certain boundaries in healthy individuals. However, it can experience significant alterations due to pathological conditions such as type I and type II diabetes, malignant tumors, cardiovascular diseases, Parkinson's disease, and other disorders. Additionally, the glycan profile is known to be associated with a person's gender, age, and lifestyle. An example of age-related changes in immunoglobulin G (IgG)-linked glycan structures includes an increase in bisection and a decrease in galactosylation and sialylation. In contrast, during pregnancy, the opposite pattern is observed, with increased galactosylation and sialylation and decreased bisection. Human saliva, serum, and plasma are easily accessible biological materials that can be used for N-glycome analysis. N-glycome profiling can provide significant insights into health and disease. For example, a recent report has highlighted the possibility of using N-glycan analysis in conjunction with genotyping to identify Alzheimer's disease at its early stages. Currently, the process of obtaining a glycan profile is considered to be relatively simple and routine. However, it does require specialized and expensive equipment, thus restricting the capacity to conduct such studies without substantial funding. We propose a newly developed protocol that is based on the same biochemical principles as the methodology for obtaining N-glycan fractions from human blood plasma described by Reiding et al. and Hennig et al. We succeeded in adapting the process for more accessible equipment available in almost any molecular biology laboratory (amplifier, shaker, centrifuge, capillary sequencer). Our method has somewhat lower throughput but does make the analysis of N-glycosylation more accessible.
2.1. Materials
The analysis was conducted using blood samples provided by four participants of a disease-oriented Russian disc degeneration study (RuDDS). Plasma sampling was performed at the Novosibirsk Research Institute of Traumatology and Orthopedics according to a previously published protocol. The following reagents were used: SDS, Igepal CA-630, PBS, APTS (8-Aminopyrene-1,3,6-Trisulfonic Acid, Trisodium Salt), 2-picoline borane, DMSO, citric acid, acetonitrile, and triethylamine (Sigma Aldrich, St. Louis, MO, USA); Biogel P10 (Bio-Rad, Hercules, CA, USA); LIZ500 length marker and Hi-Di formamide (Applied Biosystems, Carlsbad, CA, USA); and PNGase F (Promega, Madison, WI, USA). The laboratory plasticware included 1.5 mL tubes and 0.2 mL PCR tubes (Axygen Scientific, Inc., Union City, CA, USA), 0.22 µm filter mini columns with 2 mL lidded tubes (Nordic Biosite, Täby, Sweden), and hematology tubes with EDTA (ApexLab, Moscow, Russia). The following equipment was used: a +4 °C refrigerator with temperature re-mode, a −80 °C Kelvinator, a CM-6MT centrifuge (Elmi, Riga, Latvia), a 5418R centrifuge (Eppendorf, Nijmegen, the Netherlands), an OS-20 orbital shaker (Biosan, Riga, Latvia), a V-1 plus vortex (Biosan, Riga, Latvia), an M111-02-96 amplifier (Bis-N, Koltsovo, Russia), and a 3130XL automated gene analyzer (Applied Biosystems, Thermo Scientific, Waltham, MA, USA). The data were processed and analyzed using custom in-house Python scripts, available upon reasonable request. Electrophoretic signal processing and subsequent demarcation of migration peaks were performed using the SciPy and NumPy libraries. Dynamic time warping (DTW) was performed using the DTAIDistance Python module.
2.2. Methods
2.2.1. Blood Plasma Preparation
All blood samples in the study were obtained from healthy participants. The collection of blood samples from participants was performed using hematology tubes. The tubes were gently mixed to prevent bubbling. Following this, they were incubated at room temperature for 30 min. Subsequently, centrifugation was performed at 1100 rcf for 10 min at room temperature to facilitate plasma separation. The next step involved transferring the plasma into 1 mL tubes and placing them in a Kelvinator operating at −80 °C for preservation. Prior to the isolation of N-glycans, the sample was thawed, a plasma aliquot was taken for the analysis, and the remaining sample was refrozen.
2.2.2. Isolation of N-Glycans
A total volume of 4 μL of a 2% SDS solution dissolved in water was introduced into a PCR tube containing 2 μL of human plasma. The contents were mixed thoroughly using a pipette. Subsequently, the tubes containing the samples were put into an amplifier, with the reaction mixture maintained at a temperature of 65 °C and the lid temperature set at 104 °C for 10 min. Next, an aliquot of 2 μL of an aqueous solution containing 8% Igepal CA-630 was added and mixed meticulously by pipetting. The resulting mixture was left on an orbital shaker at 450 rpm for 3 min to induce protein denaturation. While the mixture was being incubated on the shaker, the PNGase F solution for deglycosylation was prepared: per sample, 2 μL of 5×PBS was mixed with 0.12 μL of PNGase F. A volume of 2 μL of this solution was introduced into the blood plasma sample.
The mixture was thoroughly agitated, sealed, and incubated in the amplifier for 3 h (reaction mixture temperature 37 °C, lid temperature 104 °C). During this stage, the plasma proteins were deglycosylated. Once the deglycosylation was complete, a 2 µL volume of the reaction mixture was transferred into a new PCR tube. A mixture of 2 μL of APTS (30 mM APTS in 3.6 M citric acid) and 2 μL of 2-picoline borane (1.2 M in DMSO) was then carefully added to the tube by gently pouring it along the inner wall. Once sealed, the tube was vortexed for 15 s at the lowest speed and then placed in the amplifier for 16 h. The reaction proceeded at 37 °C, with the lid temperature maintained at 104 °C. This procedure yielded fluorescently labeled N-glycans. The reaction was halted by adding 100 μL of cold 80% acetonitrile in water, and the tubes were refrigerated until the columns were prepared for glycan purification (incubation at a maximum of 37 °C for 1 h). Column preparation involved 200 μL of solvent, a 10% BioGel P10 suspension in a 7:2:1 mixture of water, acetonitrile, and 96% ethanol. This solvent was applied to a membrane filter column, which was then centrifuged at 1000 rcf for 1 min to eliminate the solvent. Next, 200 μL of water was added to each column, and the columns were centrifuged again at 1000 rcf for 1 min. This water rinse was performed two more times, for a total of three rinses. Then, 200 μL of an 80% acetonitrile solution in water was added to the column, followed by centrifugation at 1000 rcf for 1 min to eliminate the solvent. This acetonitrile wash was repeated two more times, for a total of three washes, to ensure thorough cleaning. After these steps, the columns were ready for glycan purification. The reaction mixture containing the APTS-labeled N-glycans and excess dye was vigorously vortexed, and the entire volume of the mixture (106 μL) was carefully transferred to the prepared column. The column was placed on an orbital shaker at 100 rpm for 5 min, and the solvent was then removed by centrifugation for 1 min at 200 rcf. Next, to remove excess dye, the column was washed with 200 μL of an 80% acetonitrile solution in water containing 100 mM triethylamine. The column was placed on an orbital shaker (100 rpm) for 2 min and then centrifuged to remove the solvent (1 min, 200 rcf). The steps from the addition of the 200 µL of 80% acetonitrile solution containing 100 mM triethylamine through centrifugation were repeated four more times, for a total of five washes with this solution. Afterwards, the N-glycan sample was washed to ensure the complete removal of triethylamine. For this purpose, a 200 μL aliquot of an 80% acetonitrile solution in water was added to the column, which was placed on an orbital shaker at 100 rpm for 2 min and then centrifuged at 200 rcf for 1 min to eliminate the solvent. This wash was repeated twice more, for a total of three washes with this solution.
Elution was carried out by applying 50 μL of water to the column and placing the column on an orbital shaker at 100 rpm for 5 min. The column was then centrifuged at 200 rcf for 2 min, and the eluent was collected in a sterile tube. Next, 100 μL of water was applied to the column, which was placed on an orbital shaker (100 rpm) for 5 min and then centrifuged (2 min, 200 rcf). The eluent was collected into the tube with the previous fraction, and this step was repeated once more. The total volume of solution containing the N-glycan fraction in the tube should amount to approximately 200 µL.
2.2.3. Separation of the N-Glycan Fraction by Capillary Gel Electrophoresis
A working mixture consisting of 3 µL of the N-glycan fraction, 1 µL of LIZ500 size standard diluted 1:50, and 6 µL of Hi-Di formamide (total volume 10 µL) was prepared for analysis. The separation of fluorescently labeled N-glycans was performed on a 3130XL genetic analyzer calibrated with the G5 dye set, using an 80 cm, 16-capillary array filled with NimaPOP-7 polymer (NimaGene). The standard electrophoresis module (GeneScan 80 POP7) for fragment analysis on the 80 cm capillary assembly was modified as follows: the oven temperature was decreased to 30 °C, and the injection voltage and time were increased to 15 kV and 20 s, respectively. After injection of the samples prepared in Hi-Di formamide, capillary electrophoresis was conducted at the standard voltage of 14.6 kV and the modified oven temperature of 30 °C. Data were collected over 7000 s and then analyzed.
2.2.4. Electropherogram Analysis
The data generated by the sequencer in ABIF format were used to extract "DATA 1" and "DATA 105", corresponding to wavelengths with maxima at 522 nm and 655 nm for the APTS-labeled N-glycome spectrum and the co-migrating LIZ-labeled oligonucleotide standard, respectively. Migration scale normalization was then carried out using the migration data of the oligonucleotide standard, and the target region between 150 and 350 nucleotides was excised. The signal was denoised using a one-dimensional Gaussian filter with a standard deviation of 5 measurement points. Baseline correction was then carried out using the ARPLS method. Peak maxima were determined by identifying local maxima in the processed spectra with a height and topological prominence of no less than 1% of the highest point in the glycomic region. Peak boundaries were set at ±3 standard deviations of a Gaussian curve fitted to the peak, centered on the maximum. The peak area was then calculated using Simpson's integration method. Detected peaks were annotated by comparison to a reference spectrum from Reiding et al. The annotation process involved aligning the spectra using DTW; a match between a pair of peaks was established if the optimal warping path connected the two peaks. Additionally, manual peak matching was performed through visual inspection. Structures corresponding to matched peaks in Reiding et al.'s reference spectrum were then assigned to the corresponding peaks detected in this study.
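For concreteness, a minimal sketch of this peak-detection pipeline is given below, assuming the SciPy/NumPy stack named in the Materials. The function names, the local fitting window, and the asymmetric least-squares smoother (a simplified stand-in for the ARPLS baseline used in the study) are illustrative assumptions rather than the authors' in-house code.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve
from scipy.ndimage import gaussian_filter1d
from scipy.signal import find_peaks
from scipy.optimize import curve_fit
from scipy.integrate import simpson

def als_baseline(y, lam=1e6, p=0.01, n_iter=10):
    """Asymmetric least-squares baseline, a simplified stand-in for ARPLS."""
    n = len(y)
    d = sparse.diags([1, -2, 1], [0, -1, -2], shape=(n, n - 2))
    w = np.ones(n)
    for _ in range(n_iter):
        weights = sparse.spdiags(w, 0, n, n)
        z = spsolve((weights + lam * d @ d.T).tocsc(), w * y)
        w = p * (y > z) + (1 - p) * (y < z)   # asymmetric reweighting
    return z

def gauss(x, a, mu, sigma):
    return a * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def detect_peaks(trace, fit_window=30):
    """Denoise, baseline-correct and quantify peaks in a glycomic-region trace."""
    smooth = gaussian_filter1d(np.asarray(trace, float), sigma=5)  # SD = 5 points
    corrected = smooth - als_baseline(smooth)
    threshold = 0.01 * corrected.max()        # 1% of the highest point
    idx, _ = find_peaks(corrected, height=threshold, prominence=threshold)
    x = np.arange(len(corrected))
    peaks = []
    for i in idx:
        lo, hi = max(i - fit_window, 0), min(i + fit_window, len(x))
        try:                                   # Gaussian fit around each maximum
            (a, mu, sigma), _ = curve_fit(gauss, x[lo:hi], corrected[lo:hi],
                                          p0=(corrected[i], i, 5.0))
        except RuntimeError:
            continue
        sigma = abs(sigma)
        left = max(int(mu - 3 * sigma), 0)     # boundaries at ±3 SD of the fit
        right = min(int(mu + 3 * sigma), len(x) - 1)
        if right - left < 2:
            continue
        area = simpson(corrected[left:right + 1], x=x[left:right + 1])
        peaks.append({"position": float(mu), "area": float(area)})
    return peaks
```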
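The DTW-based annotation step can be sketched in the same spirit, using the DTAIDistance module named in the Materials; the input names (aligned traces and lists of peak-maximum indices) are illustrative.

```python
import numpy as np
from dtaidistance import dtw

def match_peaks(sample_trace, reference_trace, sample_peaks, reference_peaks):
    """Match peak maxima between two traces via the optimal DTW warping path."""
    path = dtw.warping_path(np.asarray(sample_trace, dtype=np.double),
                            np.asarray(reference_trace, dtype=np.double))
    connected = set(path)   # (sample_index, reference_index) pairs on the path
    return [(i, j) for i in sample_peaks for j in reference_peaks
            if (i, j) in connected]
```

A pair of peaks is reported as a match only when the optimal warping path passes through both maxima, mirroring the matching criterion stated above; in practice such matches would still be confirmed by visual inspection.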
To assess the efficacy of the proposed method, blood samples were collected from four donors. For every sample, blood plasma and an N-glycan fraction were extracted, and each sample underwent four technical replicates. The electropherograms obtained through fractionation on the sequencer exhibited a signal indicative of the human plasma N-glycome. This signal aligns with the reference one, both in terms of the electrophoretic migration pattern within the glycomic region (located between the 150 and 350 nucleotide markers) and the peak magnitudes ( A,B). It is worth noting that the glycomic region in our study starts ~20 nucleotides earlier than that in the reference study. The observed differences in the migration of the LIZ-labeled size ladder relative to the glycan signal between our study and the reference study can be attributed to the reduced field strength employed in our experiments (14.6 kV/80 cm) versus that used in the reference study (15 kV/50 cm). While the mobility of smaller analytes is generally independent of electric field strength, long polymer chains, such as DNA fragments, exhibit altered electrophoretic mobility under higher field strengths, as described by the biased reptation model. Overall, we identified 30 distinct peaks with heights of at least 1% of the maximum peak height in the glycomic region of at least one electropherogram. Of these, 28 peaks were annotated as corresponding to the structures from Reiding et al. ( C, ). One peak (Peak 0 in C) did not match the reference electropherogram, and another peak (Peak 4) resulted from the overlap of the tails of two adjacent peaks and was therefore excluded from the downstream analysis. To assess the reproducibility of the method, we analyzed the synchrony of the resulting electrophoretic migration patterns and the variability of individual peak heights and areas across four biological samples of human plasma N-glycome, each with four technical replicates. The electropherograms showed significant visual resemblance ( ), as well as substantial numerical measures of synchrony: the average Pearson correlation coefficient for the spectra of the glycomic regions was 0.897, and the median lag of the maximum cross-correlation was 4 measurement points (see ). The variability in peak areas and maxima is similar to that observed with the reference method. Within single biological samples, we calculated the median coefficient of variation for maximum-height-normalized peak heights and for total-area-normalized peak areas; for both metrics, the median coefficient of variation did not exceed 0.05. By comparison, the coefficient of variation for peak areas in the reference study ( ) is 0.0992. It is important to highlight that the present study exclusively considers major peaks within the glycomic region, with heights of at least 1% of the maximum peak height in at least one electropherogram. Notably, the total count of these peaks (29; Peak 4 was excluded from the analysis because it originated from the overlap of adjacent peak tails) is lower than in the reference study (49). The higher variability observed in the latter could potentially be attributed to the inclusion of additional, noisier minor peaks. Therefore, we can confidently state that N-glycans can be effectively extracted from human blood plasma and analyzed using our approach.
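For illustration, the reproducibility metrics reported here (pairwise Pearson correlation, lag of the maximum cross-correlation, and coefficient of variation of total-area-normalized peak areas) could be computed along the following lines; the array names and shapes are assumptions.

```python
import numpy as np
from itertools import combinations
from scipy.signal import correlate, correlation_lags

def reproducibility(traces, areas):
    """traces: (n_runs, n_points) aligned glycomic regions;
    areas: (n_runs, n_peaks) raw peak areas for the same runs."""
    r_values, lags = [], []
    for a, b in combinations(traces, 2):
        r_values.append(np.corrcoef(a, b)[0, 1])            # pairwise Pearson r
        xcorr = correlate(a - a.mean(), b - b.mean(), mode="full")
        lag = correlation_lags(len(a), len(b), mode="full")[np.argmax(xcorr)]
        lags.append(abs(lag))                                # lag of max cross-correlation
    rel_areas = areas / areas.sum(axis=1, keepdims=True)     # total-area normalization
    cv = rel_areas.std(axis=0, ddof=1) / rel_areas.mean(axis=0)
    return float(np.mean(r_values)), float(np.median(lags)), float(np.median(cv))
```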
We have introduced an effective approach for characterizing the human plasma N-glycome, comprising blood sample collection, plasma isolation, and subsequent N-glycan fractionation. The protocol involves the denaturation of glycoproteins in an SDS solution, followed by the liberation of the N-glycan fraction using the PNGase F enzyme. APTS dye is then added to label the glycans fluorescently, and the unbound dye is removed by gel filtration. The samples obtained are fractionated and detected on an ABI 3130xl DNA Analyzer together with an added DNA size standard. The main advantage of our method, when compared to the method described in , is the availability of the equipment employed. Specifically, the mini centrifuge designed for plates was substituted with a vortex for PCR tubes; the thermostat for PCR plates with a temperature gradient was substituted with a PCR amplifier; the 96-well membrane plates intended for sample washing were replaced with membrane columns; and a centrifuge was employed instead of a plate vacuum manifold. This method allows one to isolate up to 48 N-glycan samples per day and measure up to 384 samples in one run of the instrument. Although its throughput is somewhat lower than that of similar techniques (because tubes are used instead of 96-well plates in the glycan isolation step), its more straightforward instrumentation gives more researchers the ability to study N-glycosylation. The primary output of this technique is an N-glycome fingerprint, which can be used to analyze the similarities of different biological samples, both in bulk and by individual peaks. By supplying an N-glycome fingerprint with annotations for individual peaks, obtained either through cross-referencing with existing migration-pattern databases for N-glycan species or by conducting glycosidase sequencing, researchers can uncover valuable insights into fundamental biological processes, linking alterations in N-glycome profiles to human health and disease. Moreover, the developed protocol can be extended to other N-glycome fractions, enhancing its utility. Specifically, the steps of deglycosylation, fluorescent labeling, and subsequent fractionation by capillary gel electrophoresis enable the generation of distinct electrophoretic migration patterns for individual N-glycoproteins, such as immunoglobulin G (IgG) and transferrin.
The cost-efficiency and analytical potential of the developed technique enable more laboratories to conduct frequent N-glycome analyses across various biological contexts and human conditions. The ability to link N-glycome changes to potential biomarkers can greatly enhance our understanding of disease mechanisms and contribute to the development of targeted therapies. Although our sample preparation method has a lower throughput, it allows N-glycome analysis at a reasonable cost.
Different antithrombotic strategies after left atrial appendage closure with the LACbes occluder: protocol of the DAAL trial | 89223a8d-3f39-40ae-b897-a95ef2938047 | 11927426 | Cardiovascular System[mh] | Non-valvular atrial fibrillation (NVAF) is the most common arrhythmia in clinical practice. The latest data show that the prevalence of atrial fibrillation in China is as high as 1.6%. With the ageing of the population, the number of NVAF patients is expected to increase significantly in the next few years. Compared with the non-AF population, patients with NVAF have a fivefold increased risk of ischaemic stroke and systemic embolism, accounting for 20% of all ischaemic stroke events, and have a worse prognosis than patients with stroke from other causes. Currently, the CHA2DS2-VASc score is recommended to assess the risk of stroke in patients with NVAF. Men with a score ≥2 or women with a score ≥3 should take oral anticoagulants (OACs) for life to reduce the risk of stroke. OACs include vitamin K antagonists and direct oral anticoagulants (DOACs), such as thrombin inhibitors (dabigatran) and factor Xa inhibitors (rivaroxaban, edoxaban, etc.). DOACs show more stable anticoagulant properties than the vitamin K antagonist warfarin. However, owing to the increased risk of bleeding, their widespread clinical use is limited. The annual incidences of major bleeding and minor bleeding caused by DOACs are 1.5–3.6% and 15–20%, respectively, so DOACs should be used more cautiously in patients with a high bleeding risk. More than 90% of thrombi in NVAF patients originate from the left atrial appendage (LAA). Therefore, in recent years, left atrial appendage closure (LAAC) has been used to prevent thrombus detachment. This technique can achieve the same preventive effect as OACs and significantly reduce the risk of disability or death caused by thromboembolism in NVAF patients. In addition, LAAC has significant advantages in reducing the risk of bleeding. However, to prevent device-related thrombosis (DRT) and promote endothelialisation, patients still need antithrombotic therapy in the short term after LAAC. Although European and American guidelines and the Chinese expert consensus on LAAC have proposed antithrombotic strategy recommendations according to occluder type, with anticoagulation preferentially recommended, recently published real-world data suggest that this strategy has not been widely adopted. In China, LAAC operators often choose medication regimens considering patients' wishes and the risks of bleeding and stroke. Current statistics show that antithrombotic therapy is still dominated by DOACs, accounting for approximately 80% of cases, which poses problems for patients who cannot tolerate anticoagulation or who need concomitant antiplatelet drugs, such as patients managed after coronary intervention. With the continuing practice of LAAC worldwide and new breakthroughs in device development, the antithrombotic management strategy for NVAF patients receiving LAAC also needs to be optimised accordingly. The LACbes occluder is one of the LAA occluders developed independently in China; it has gone through the stages of appearance design, barb development, finalisation, animal experiments and pre-market clinical research. It is approved and available only in China and entered the market in 2019.
The LACbes occluder is approved for patients who cannot tolerate long-term anticoagulation and is also used as an alternative to anticoagulation. It has a stable structure, positions well and seals effectively against the LAA wall, and is easy to recapture and redeploy: it can be retracted into the delivery catheter several times and fully recovered without damaging the barbs or the catheter. The landing zone can be selected according to the actual shape and size of the LAA, shortening the procedure time and reducing procedural risk. However, with regard to postoperative medication, there is no evidence-based guideline on the optimal antithrombotic strategy after LAAC with the LACbes occluder, and medical centres can only extrapolate empirical anticoagulation or antiplatelet regimens from other occluders in clinical practice. We retrospectively analysed a total of 98 patients who underwent LAAC with the LACbes occluder in our centre from 2020 to 2021 and completed transoesophageal echocardiography (TEE) follow-up at 3 months after the procedure. Of these, 61 patients in the DOAC group were treated with a DOAC, while 27 patients in the DAPT group were treated with dual antiplatelet therapy (DAPT). TEE at 3 months after surgery showed that the incidence of DRT in the two groups was 6.6% and 3.7%, respectively, a difference that was not statistically significant (p=0.76, see ). In this retrospective analysis of a small sample, there was no significant difference in DRT formation between the two antithrombotic regimens, DAPT and DOAC, after LAAC with the LACbes occluder, but the level of evidence is low because the analysis was small and non-randomised. Therefore, we propose a prospective clinical study to compare the efficacy and safety of two antithrombotic strategies, DAPT including aspirin and ticagrelor versus DOAC with rivaroxaban, after LAAC with a LACbes occluder, using a randomised, controlled, open-label methodology, with the aim of providing strong evidence-based results on the optimal postoperative antithrombotic strategy for the LACbes occluder.
Patient selection
This study intends to enrol 296 consecutive patients with atrial fibrillation after LAAC with the LACbes occluder. Patient selection criteria are presented in .
Study design
This study is a prospective, randomised, controlled, multicentre clinical trial that will compare the efficacy and safety of two different antithrombotic strategies, DAPT versus DOAC, after left atrial appendage closure with the LACbes occluder. The trial plans to enrol 296 subjects with NVAF who have successfully completed LAAC, using multicentre competitive enrolment and a central randomisation system designed by the trial statistician. After the indication for LAAC has been determined, preoperative TEE or cardiac computed tomography angiography (CTA) will be performed to exclude thrombus and to measure and assess LAA anatomy. If the patient is an appropriate candidate for LAAC with the LACbes occluder, the study will be explained and informed consent ( ) will be signed. Patients will be randomised into two groups on a 1:1 basis after a successful LACbes occlusion procedure. Randomisation will be performed via the internet with a centralised randomisation system designed by the statistical side of the trial. Patients will not be included if the procedure fails (device not implanted or complications in the immediate postoperative period). According to the results of randomisation, patients in the DAPT group will be given aspirin 100 mg plus clopidogrel 75 mg/day for 12 weeks, and patients in the DOAC group will be given rivaroxaban (15 mg/day) for 12 weeks. Conditions requiring aspirin use within 3 months after LAAC, such as acute coronary syndrome (ACS), will be reported as adverse events. After completion of the 12-week follow-up visit to rule out DRT, both groups will be switched to DAPT until 6 months postoperatively and then switched to single antiplatelet therapy. After 12 months, the surgeon will decide whether the patient should be maintained on aspirin therapy. The investigators will record baseline data within 24 hours after the procedure and relevant follow-up information at 3, 6 and 12 months after the procedure to investigate between-group differences in the incidence of occluder-associated thrombosis, clinical thrombotic events, other thrombotic events and bleeding events. The patient enrolment scheme is shown in .
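Allocation in the trial is performed by the central web-based system described above; purely as an illustration of how a 1:1 permuted-block sequence for 296 subjects could be generated, a minimal sketch follows. The block size, group labels and seed handling are illustrative assumptions and do not describe the trial's actual system.

```python
import random

def permuted_block_sequence(n_subjects=296, block_size=4, seed=None):
    """Illustrative 1:1 permuted-block allocation list ('DAPT' vs 'DOAC')."""
    rng = random.Random(seed)
    sequence = []
    while len(sequence) < n_subjects:
        block = ["DAPT"] * (block_size // 2) + ["DOAC"] * (block_size // 2)
        rng.shuffle(block)                 # keeps groups balanced within each block
        sequence.extend(block)
    return sequence[:n_subjects]
```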
Endpoints
Primary endpoints
The primary efficacy endpoint is 12-month freedom from major adverse clinical events in both groups, including stroke/transient ischaemic attack, other thromboembolic events, device-related thrombotic events and all-cause mortality. The primary safety endpoint comprises bleeding events (Bleeding Academic Research Consortium (BARC) type ≥3a) at 3 months after surgery.
Secondary endpoints
Device-related thrombotic events at 3 months after surgery. Incidence of minor bleeding at 6 and 12 months. Degree of endothelialisation at 6 and 12 months. The incidence of complete endothelialisation will be evaluated by CTA and is defined as a radiodensity of less than 100 HU in the atrial appendage or less than 25% of the left atrial CT value.
Postoperative follow-up and evaluation/endpoints
Clinical follow-up, including clinical visits and physical examinations, will be performed at 3, 6 and 12 months. Clinical visits will include specific assessments of thrombotic and bleeding events as well as general adverse events and serious adverse events. During these clinical visits, special questions will be asked about concomitant therapy, subject discontinuation following any use of the study drug, consent withdrawal, and risk of overdose or pregnancy. At 3, 6 and 12 months, we will perform general laboratory tests, including haemoglobin levels, platelet counts, coagulation status and renal function. Imaging follow-up will include TEE (at 3 months after surgery) and CTA (at 3, 6 and 12 months). Antithrombotic therapy will be continued, stopped or changed according to the treating physician's criteria in the event of any clinical thromboembolic event, device-related thrombotic event or major bleeding. The outcome and event definitions are shown in .
Sample size justification
A total of 296 patients are expected to be enrolled in this trial and randomised into the study or control group at a 1:1 ratio, with 148 cases in each group. The sample size calculation is based on the primary evaluation measure, that is, freedom from major clinical events. Considering the available clinical evidence and the experience of clinical experts, the control group is assumed to have a 95% rate of freedom from major clinical events, and the test group is expected to achieve the same level of safety with the investigational regimen. As determined by clinical discussions, the non-inferiority margin has been set at 8%, referring to the study initiated by the Structural Heart Disease Center of Fuwai Hospital, which also used antithrombotic drugs after disc occluder implantation, and taking into account our prior clinical experience and practical implications. A one-sided significance level of 0.025 and a power of 90% have been adopted for the statistical test. Allowing for a maximum possible dropout rate of 10%, 148 patients need to be enrolled in each group, giving a total of 296 patients across the two groups. The corresponding sample size formula is

$$n = \frac{\left[\mu_{1-\alpha}\sqrt{2\bar{p}(1-\bar{p})} + \mu_{1-\beta}\sqrt{p_{T}(1-p_{T}) + p_{C}(1-p_{C})}\right]^{2}}{\left[\Delta - (p_{T} - p_{C})\right]^{2}}$$

where $p_{T}$ is the rate of freedom from major clinical events in the test group, $p_{C}$ is the corresponding rate in the control group, $\bar{p}$ is the average of the two rates, $\Delta$ is the non-inferiority margin, and $\mu$ denotes the quantile of the standard normal distribution; $\alpha$ is the type I error level of the statistical test (0.025, one-sided) and $\beta$ is the type II error level (0.1, corresponding to 90% power). No additional cases will be added after subjects have withdrawn for various reasons.
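As a worked sketch of the formula above, the per-group size can be computed with the protocol's stated inputs (pT = pC = 0.95, Δ = 0.08, one-sided α = 0.025, 90% power, up to 10% dropout); the rounding and dropout-inflation conventions in this sketch are assumptions and may not reproduce the protocol's exact figure.

```python
from math import sqrt, ceil
from scipy.stats import norm

def per_group_sample_size(p_t=0.95, p_c=0.95, delta=0.08,
                          alpha=0.025, power=0.90, dropout=0.10):
    """Non-inferiority sample size per group for two proportions."""
    z_alpha = norm.ppf(1 - alpha)          # one-sided type I error quantile
    z_beta = norm.ppf(power)               # quantile for 90% power
    p_bar = (p_t + p_c) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p_t * (1 - p_t) + p_c * (1 - p_c))) ** 2
    n = numerator / (delta - (p_t - p_c)) ** 2
    return ceil(n / (1 - dropout))         # inflate for anticipated dropout
```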
Data management
Data will be collected by the investigators at each participating institution and then uploaded to and stored on the electronic data capture system by the clinical research coordinator (CRC) to protect confidentiality before, during and after the trial. The database will not be unblinded until protocol violations have been identified, data collection has been declared complete and the medical and scientific review has been completed. The final dataset will be encrypted and stored in an online database accessible only to the main researchers and administrators. All study-related information will be stored securely at the study site. All participant information will be stored in locked file cabinets in areas with limited access. All laboratory specimens, reports, data collection, process and administrative forms will be identified by a coded ID number only to maintain participant confidentiality. All records that contain names or other personal identifiers, such as locator forms and informed consent forms, will be stored separately from study records identified by code number. All local databases will be secured with password-protected access systems. Forms, lists, logbooks, appointment books and any other listings that link participant ID numbers to other identifying information will be stored in a separate, locked file in an area with limited access.
Data analysis
Efficacy analysis will be performed on the intention-to-treat set, which consists of all randomised patients. All results of the efficacy analysis will be analysed in the full analysis set (FAS) and the per-protocol set (PPS), which includes all randomised patients without major protocol deviations. Descriptions of quantitative indicators will include the mean, SD, median, minimum, maximum, lower quartile (Q1) and upper quartile (Q3). Descriptions of categorical indicators will include the number and percentage of each type. Statistical tests will first apply parametric methods; if the data distribution differs markedly from the distribution assumed by the tests, non-parametric methods will be used. The primary evaluation indicator will adopt a one-sided 0.025 significance level, whereas other statistical tests, unless otherwise specified, will be two-sided with a significance level of 0.05. Two-sided 95% CIs will be calculated. Pearson's χ2 or Fisher's exact test will be performed on freedom from major adverse events between the test and control groups, and point estimates and CIs for the between-group difference in rates will be calculated using the Newcombe-Wilson method. If the lower confidence limit of the rate difference is greater than −8%, the null hypothesis will be rejected and the test group will be considered non-inferior to the control group; if the lower confidence limit is greater than 0, the test group will be considered superior to the control group. HR point estimates and 95% CIs will also be calculated for the relative risk of events in the two groups. The primary efficacy evaluation will be based on the FAS and PPS, and other efficacy evaluations will be based on the FAS. Safety evaluations will be performed on the safety analysis set. Analysis of the safety parameters will be conducted as follows. Vital signs, laboratory tests and other adverse events that were normal before treatment and abnormal after treatment will be described, and the numbers of cases and proportions will be listed. Adverse events will be grouped according to the number and incidence of all adverse events and serious adverse events, as well as the number and incidence of device-related adverse events and serious adverse events. Moreover, the specific manifestations and severity of all adverse events that occur in each group and their relationship with the investigational device will be described in detail. The incidences of device-related adverse events and device malfunctions will be compared between the test and control groups.
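The primary non-inferiority comparison described above can be sketched as follows: a Newcombe-Wilson (hybrid score) interval for the difference in event-free rates, combined with the trial's decision rule that the lower confidence limit must exceed the −8% margin. Variable names are illustrative.

```python
from math import sqrt
from scipy.stats import norm

def wilson_limits(k, n, z):
    """Wilson score interval for a single proportion k/n."""
    p = k / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

def newcombe_wilson_diff(k_test, n_test, k_ctrl, n_ctrl, conf=0.95):
    """CI for p_test - p_ctrl by Newcombe's hybrid score method."""
    z = norm.ppf(1 - (1 - conf) / 2)
    p1, p2 = k_test / n_test, k_ctrl / n_ctrl
    l1, u1 = wilson_limits(k_test, n_test, z)
    l2, u2 = wilson_limits(k_ctrl, n_ctrl, z)
    diff = p1 - p2
    lower = diff - sqrt((p1 - l1) ** 2 + (u2 - p2) ** 2)
    upper = diff + sqrt((u1 - p1) ** 2 + (p2 - l2) ** 2)
    return lower, upper

# Decision rule from the protocol: non-inferiority is concluded if
# lower > -0.08, and superiority if lower > 0.
```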
The statistical analysis is consistent with the ICH E9 guideline (International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use: Statistical Principles for Clinical Trials) and with the relevant requirements of the Biostatistics Guidelines for Clinical Trials issued by the National Medical Products Administration (NMPA). After the study protocol is finalised, the statistical analysis plan will be prepared by the statistician in consultation with the principal investigator. Each evaluation indicator and the corresponding statistical analysis method in this trial will be described in detail in the statistical analysis plan. For missing data that may occur in the trial, a relatively conservative approach will be used for imputation in the primary analysis; other missing indicators will not be imputed. For the handling of erroneous or implausible data, logical verification and quality management will be performed during data management. Queries will be raised with the investigator for any erroneous or implausible data, and such data will be adjusted according to the investigator's written reply until all issues are resolved before the database is locked.
Study organisation
The study steering committee is responsible for managing the scientific aspects of the study and is formed by the principal investigators of each participating institution and representatives from the sponsor and the clinical research organisation (CRO). The study steering committee interacts with the sponsor and the CRO on study progress and related issues. Of note, because this is an investigator-sponsored research programme, the manufacturer of the LACbes occluder (Shanghai Pushi Medical Instrument) is not a participant in the design, conduct, data collection or statistical analysis of the study; the manufacturer only provides technical and coordination support. An independent clinical events committee (CEC) is responsible for adjudicating events reported during this clinical trial. The CEC consists of three independent members, including two cardiologists and one neurologist, and is blinded to the patient's treatment arm for the adverse events it adjudicates. In addition, an independent data monitoring committee (DMC) has been established, including two cardiologists and one biostatistician. The DMC holds periodic meetings to review study data. The DMC may recommend stopping the study early if the observed event rate is deemed unacceptable, and may also recommend revising the protocol if deemed necessary to maintain the safety and welfare of the subjects involved. The DMC also has the right to unblind the patient and the investigator when the patient has serious adverse events suspected to be related to the DAPT or DOAC.
Patient and public involvement
Neither patients nor the public will be directly involved in the design or conduct of this study, nor are they invited to participate in the writing or editing of this document. The recruitment of patients is based on their eligibility and the participation protocol (signed informed consent). When signing the informed consent form, all participants will be asked whether they wish to be informed about the results of the trial; they will receive a summary of the results if desired.
This study intends to enrol 296 consecutive patients with atrial fibrillation after LACbes LAAC. Patient selection criteria are presented in .
This study is a prospective, randomised, controlled, multicentre clinical trial that will compare the efficacy and safety of two different antithrombotic strategies, DAPT versus DOAC, after left atrial appendage closure with the LACbes occluder. The trial plans to enrol 296 subjects with NVAF who have successfully completed LAAC and randomly assign patients to the enrolment using a multicentre competitive enrolment and central randomisation system. The central randomisation system was designed by the trial statistician. After determining the indication for LAAC, preoperative TEE or cardiac computed tomography angiography (CTA) will be performed to exclude thrombus and to measure and assess LAA anatomy. If the patient is a appropriate candidate for LAAC with LACbes occluder, the study will be explained and informed consent ( ) will be signed. Patients will be randomised into two groups on a 1:1 basis after the successful LACbes occlusion procedure. The randomisation process will be achieved via the internet with a centralised randomisation system designed by the statistical side of the trial. Patients will not be included if the procedure fails (device not implanted or complications in the immediate postoperative period of the procedure). According to the results of randomisation, patients in the DAPT group will be given aspirin 100 mg+clopidogrel 75 mg/day for 12 weeks, and patients in the DOAC group will be given rivaroxaban (15 mg/day) for 12 weeks. Conditions requiring aspirin use, such as acute coronary syndrome (ACS), within 3 months after LAAC will be reported as adverse events. After completion of the 12-week follow-up visit to rule out DRT, both groups will be switched to DAPT until 6 months postoperatively, and then switched to single antiplatelet therapy. After 12 months, the surgeon will decide whether the patient should be maintained on aspirin therapy. The investigators will record baseline data within 24 hours after the procedure and relevant follow-up information at 3, 6 and 12 months after the procedure to investigate differences in the incidence of comparative occluder-associated thrombosis, clinical thrombotic events, other thrombotic events and bleeding events. The patient enrolment schemes are shown in .
Primary endpoints The primary efficacy endpoint is the 12-month freedom from major adverse clinical events in both groups, including stroke/transient ischaemic attack, other thromboembolic events, device-related thrombotic events and all-cause mortality. The primary safety endpoint comprises bleeding events (referred to Bleeding Academic Research Consortium (BARC) criteria ≥3 a) at 3 months after surgery. Secondary endpoints Device-related thrombotic events at 3 months after surgery. Incidence of minor bleeding at 6 and 12 months. Degree of endothelialisation at 6 and 12 months. The incidence of complete endothelialisation will be evaluated by CTA, which was defined as a radiation density CT value of less than 100 HU in the atrial appendage or less than 25% of the left atrial CT value.
The primary efficacy endpoint is the 12-month freedom from major adverse clinical events in both groups, including stroke/transient ischaemic attack, other thromboembolic events, device-related thrombotic events and all-cause mortality. The primary safety endpoint comprises bleeding events (referred to Bleeding Academic Research Consortium (BARC) criteria ≥3 a) at 3 months after surgery.
Device-related thrombotic events at 3 months after surgery. Incidence of minor bleeding at 6 and 12 months. Degree of endothelialisation at 6 and 12 months. The incidence of complete endothelialisation will be evaluated by CTA, which was defined as a radiation density CT value of less than 100 HU in the atrial appendage or less than 25% of the left atrial CT value.
Clinical follow-up, including clinical visits and physical examinations, will be performed at 3, 6 and 12 months. Clinical visits will include specific assessments of thrombotic and bleeding events as well as general adverse events and serious adverse events. During these clinical visits, special questions will be asked about concomitant therapy, subject discontinuation following any use of the study drug, consent withdrawal, risk of overdose or pregnancy. At 3, 6 and 12 months, we will perform general laboratory tests, including haemoglobin levels, platelet counts, coagulation status and renal function. Imaging follow-up will include TEE (at 3 months after surgery) and CTA (at 3, 6 and 12 months). Antithrombotic therapy will be continued, stopped or changed according to the treating physician’s criteria in the event of any clinical thromboembolic event, device-related thrombotic event or major bleeding. The outcome and event definitions are shown in .
A total of 296 patients are expected to be enrolled in this trial and randomised into the study or control group at a 1:1 ratio, with 148 cases in each group. The sample size calculation is based on the primary evaluation measure, that is, no occurrence of major clinical events. Considering the available clinical evidence and the experience of clinical experts, it is assumed that the control group would have a 95% non-incidence rate of major clinical events, and it is expected that the test group would be able to achieve the same level of safety with the application of the test product. As determined by clinical discussions, the non-inferiority cut-off value has been set at 8%, referring to the study initiated by the Structural Heart Disease Center of Fuwai Hospital which also used antithrombotic drugs after disk occluder implantation and taking into account our prior clinical experience and practical implications. The significance level of the statistical test has been adopted at 0.025 unilaterally; the certainty level has been accepted at 90%. According to the maximum possible fall-out rate of 10% in the study, 148 patients need to be enrolled in each group based on the principle of statistics, and the total number of cases in the two groups will be 296. The corresponding sample size calculation formula is as follows: n = [ μ 1 − α 2 p ¯ ( 1 − p ¯ ) + μ 1 − β p T ( 1 − p T ) + p C ( 1 − p C ) ] 2 ( Δ − ( p T − p C ) ) 2 p T in the formula corresponds to the non-incidence of major clinical events in the test group, p C represents the non-incidence level of major clinical events in the control group, p ¯ represents the non-incidence rate of average major clinical events in the two groups, Δ corresponds to the non-inferiority margin, μ represents the quantile of the standard normal distribution, α corresponds to the type I error level of the statistical test and 0.025 (one-sided) is taken here, while β corresponds to the type II error level of the test, and 0.1 (corresponding to the 90% power level) is taken for calculation. No additional cases will be added after certain subjects have withdrawn for various reasons.
Data will be collected by the investigators from each participating institution and then uploaded and stored on the electronic data capture system by the clinical research coordinator (CRC) to protect confidentiality before, during and after the trial. The database will not be unblinded until protocol violations have been identified, data collection has been declared complete and the medical and scientific review has been completed. The final dataset will be encrypted and stored in an online database accessible only to the main researchers and administrators. All study-related information will be stored securely at the study site. All participant information will be stored in locked file cabinets in areas with limited access. All laboratory specimens, reports, data collection, process and administrative forms will be identified by a coded ID number only to maintain participant confidentiality. All records that contain names or other personal identifiers, such as locator forms and informed consent forms, will be stored separately from study records identified by code number. All local databases will be secured with password-protected access systems. Forms, lists, logbooks, appointment books and any other listings that link participant ID numbers to other identifying information will be stored in a separate, locked file in an area with limited access.
Efficacy analyses will be performed on the intention-to-treat population, which consists of all randomised patients. All efficacy results will be analysed in the full analysis set (FAS) and the per-protocol set (PPS); the PPS includes all randomised patients without major protocol deviations. Descriptions of quantitative indicators will include the mean, SD, median, minimum, maximum, lower quartile (Q1) and upper quartile (Q3). Descriptions of categorical indicators will include the number and percentage of each type. Statistical tests will first apply parametric methods; if the data distribution differs markedly from the distribution assumed by these tests, non-parametric methods will be used. The primary evaluation indicators will adopt a one-sided 0.025 significance level, whereas other statistical tests, unless otherwise specified, will be two-sided with a significance level of 0.05. Two-sided 95% CIs will be calculated. Pearson’s χ2 or Fisher’s exact probability tests will be performed on the freedom from major adverse events between the test and control groups, and point estimates and CIs for the difference in rates between groups will be calculated using the Newcombe-Wilson method. If the lower limit of the CI of the rate difference is >−8%, the null hypothesis will be rejected and the test group will be considered non-inferior to the control group; if the lower limit of the CI of the rate difference is >0, the test group will be considered superior to the control group. HR point estimates and 95% CIs will also be calculated for the relative risk of events in the two groups. The primary efficacy evaluation will be based on the FAS and PPS, and other efficacy evaluations will be based on the FAS. Safety evaluations will be performed on the safety analysis set. Analysis of the safety parameters will be conducted as follows: vital signs, laboratory tests and other adverse events that were normal before treatment and abnormal after treatment will be described, and the numbers of cases and proportions will be listed. The number and incidence of all adverse events and serious adverse events, as well as of device-related adverse events and serious adverse events, will be tabulated by group. Moreover, the specific manifestations and severity of all adverse events occurring in each group, and their relationship to the investigational device, will be described in detail. The incidences of device-related adverse events and device malfunctions will be compared between the test group and the control group. The statistical analysis is consistent with ICH E9 (Statistical Principles for Clinical Trials, The International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use) and with the relevant requirements of the Biostatistics Guidelines for Clinical Trials issued by the National Medical Products Administration (NMPA). After the study protocol is finalised, the statistical analysis plan will be prepared by the statistician in consultation with the principal investigator. Each evaluation indicator and the corresponding statistical analysis method will be described in detail in the statistical analysis plan. For missing data that may occur in the trial, a relatively conservative approach will be used for imputation in the primary analysis; other missing indicators will not be imputed.
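For concreteness, the Newcombe-Wilson interval for a difference of two proportions can be implemented in a few lines. The sketch below is a minimal illustration: the function names and the example counts are hypothetical, and a production analysis would of course follow the formal statistical analysis plan.

```python
from math import sqrt
from scipy.stats import norm

def wilson_ci(k, n, z):
    """Wilson score interval for a single proportion k/n."""
    p = k / n
    centre = (p + z**2 / (2 * n)) / (1 + z**2 / n)
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)
    return centre - half, centre + half

def newcombe_diff_ci(k1, n1, k2, n2, alpha=0.05):
    """Newcombe hybrid Wilson-score CI for the difference p1 - p2."""
    z = norm.ppf(1 - alpha / 2)
    p1, p2 = k1 / n1, k2 / n2
    l1, u1 = wilson_ci(k1, n1, z)
    l2, u2 = wilson_ci(k2, n2, z)
    lower = (p1 - p2) - sqrt((p1 - l1) ** 2 + (u2 - p2) ** 2)
    upper = (p1 - p2) + sqrt((u1 - p1) ** 2 + (p2 - l2) ** 2)
    return lower, upper

# Hypothetical example: 140/148 event-free in the test arm, 141/148 in control.
lo, hi = newcombe_diff_ci(140, 148, 141, 148)
print(f"95% CI for the rate difference: ({lo:.3f}, {hi:.3f})")
print("non-inferior at the -8% margin" if lo > -0.08 else "non-inferiority not shown")
```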
To handle incorrect or implausible data, logical verification and quality management will be performed on the database during data management. Queries will be raised with the investigator for any erroneous or implausible data, and such data will be corrected according to the investigator’s written reply; all queries must be resolved before the database is locked.
The study steering committee is responsible for managing the scientific aspects of the study and is formed by the principal investigators of each participating institution and representatives of the sponsor and of the clinical research organisation (CRO). The steering committee interacts with the sponsor and the CRO on study progress and related issues. Of note, as this is an investigator-sponsored research programme, the manufacturer of the LACbes occluder (Shanghai Pushi Medical Instrument) is not involved in the design, conduct, data collection or statistical analysis of the study; the manufacturer only provides technical and coordination support. An independent clinical events committee (CEC) is responsible for adjudicating events reported during this clinical trial. The CEC consists of three independent members, including two cardiologists and one neurologist, and is blinded to the treatment arm of the patients whose adverse events it adjudicates. In addition, an independent data monitoring committee (DMC) has been established, including two cardiologists and one biostatistician. The DMC meets periodically to review study data. The DMC may recommend stopping the study early if the observed event rate is deemed unacceptable, and may also recommend revising the protocol if deemed necessary to maintain the safety and welfare of the subjects involved. The DMC also has the right to unblind the patient and the investigator when a patient has serious adverse events suspected to be related to the DAPT or DOAC.
Neither patients nor the public will be directly involved in the design or conduct of this study, nor are they invited to participate in the writing or editing of this document. Patients are recruited to the study on the basis of their eligibility and signed informed consent. When signing the informed consent form, all participants will be asked whether they wish to be informed about the results of the trial; those who do will receive a summary of the results.
Ethics approval was obtained from the Ethics Committee of Shanghai Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China (approval number SH9H-2022-T426-1) and from the other participating centres . This clinical study is being conducted in compliance with the Declaration of Helsinki (2013), Good Clinical Practice for Medical Devices and relevant national regulations. Any modification to the protocol that may affect the conduct of the study, the potential benefit to patients or patient safety, including changes to the study objectives, study design, patient population, sample size, study procedures or significant administrative aspects, will require a formal protocol amendment. Such an amendment will be approved by the Ethics Committee prior to implementation, and the health authorities will be notified in accordance with local regulations. Clinical trial insurance has been purchased to provide compensation in the event of trial-related physical harm to participants, including health impairment and death. The results are expected in 2025 and will be disseminated through peer-reviewed journals and presentations at national and international conferences.
|
Black phosphorus conjugation of chemotherapeutic ginsenoside Rg3: enhancing targeted multimodal nanotheranostics against lung cancer metastasis | d64d2d2a-61d6-4f92-95cb-cccdec882d5f | 8409949 | Pharmacology[mh] | Introduction Cancer remains a significant cause of death and a major global health problem. Breast cancer is one of the most prevalent cancers and a leading cause of cancer death in women (Oku et al., ; Wang et al., ; Lin et al., ; Xu et al., ; Wan & Ping, ). Although primary tumors cause considerable morbidity, most patients have a favourable prognosis after removal of the primary tumor, even without chemotherapy. Once tumor cells metastasize to distant organs, however, including the lungs, bones and brain, the disease becomes incurable in the absence of specific medicines, and metastatic disease accounts for over 90% of breast cancer mortality (Low & Kularatne, ; Liang et al., ; Gao et al., ). Nevertheless, little progress has been made towards effective antimetastatic therapy, despite ongoing efforts across many fields of nanomedicine (Song et al., ). Most of these efforts have focused on therapeutic techniques that tune physicochemical properties, such as the ‘3S’ parameters (size, structure and surface) and electrostatic interactions (Nguyen et al., ; Qian et al., ; Xu et al., ). However, nanocomposites have rarely been evaluated both as tumor-killing agents and as site-accurate vehicles for tumor control. Recently, black phosphorus (BP), an emerging star of materials research, has shown superior physicochemical properties to other two-dimensional nanomaterials and holds strong promise for a broad range of cancer therapies and diagnostics (Sung et al., ; Suetsugu et al., ; Sang et al., ). In cancer therapy in particular, BPs have been tested for drug and gene delivery and for photochemotherapy. Previous findings have shown that the excellent drug-loading and delivery abilities of BP enhance its anticancer effectiveness (Ou et al., ; Geng et al., ; Li et al., ), and a recent report described a unique chemical toxicity of BPs towards cancer cells. However, tremendous obstacles remain: BP-based nanocomposites offering versatile chemothermal therapy tailored to the characteristics of advanced tumors, especially lung metastasis, have yet to be established (Raucci et al., ). Phosphorus is a crucial element for humans and an indispensable constituent of cellular and biological components, and its biodegradability endows BPs with superior biocompatibility and favourable nano-bio interactions (Li et al., ; Chen et al., ; Wang et al., ). Such distinctive features provide the rationale for investigating BPs as tumor site-selective platforms for drug delivery and as therapeutic agents. In this study, a multifunctional theranostic nanoagent (MTN) based on BP nanodrugs was developed as a precise nanomedicine against lung metastasis of breast cancer. Our combined results uncovered a new BP-based multifunctional nanocomposite that suppresses breast cancer metastasis.
Experimental 2.1. Fabrication of BPsQDs BPs crystals were purchased from Smart-Elements (Vienna, Austria). 1-Methyl-2-pyrrolidinone (NMP, 99.5%, anhydrous) was obtained from Sigma-Aldrich Co., LLC (Santa Barbara, CA). All chemicals used in this study were of analytical reagent grade. BPsQDs were synthesized by the liquid exfoliation method, as reported in a recent report (McAllaster & Cohen, ; Yun et al., ; Tian et al., ). Briefly, 20 mg of BPs powder was dispersed in 20 mL NMP, and the mixture was sonicated with a sonic tip at an ultrasonic frequency of 19–25 kHz for 4 h (2 s on, 4 s off) at a power of 1200 W. Afterwards, the mixture was further sonicated overnight in an ice bath at a power of 300 W. Finally, the BPsQDs dispersed in the supernatant were collected after centrifugation at 7000 rpm for 20 min. Prior to nanocomposite synthesis, the obtained BPsQD supernatants were further centrifuged at 12,000 rpm for 20 min and then washed three times with dichloromethane (DCM) to obtain high-quality BPsQDs. 2.2. Preparation of BPs/Cy5.5@PLGA and BPs/G-Rg3@PLGA Polyvinyl alcohol (PVA, MW: 9000–10,000), PLGA (50:50, MW: 40,000–70,000) and DCM solutions were acquired from Sigma-Aldrich (Shanghai, China). Cy5.5 NIR fluorescence dye was obtained from Innova Biosciences (Cambridge, UK). The prepared BPsQDs were redispersed in a PLGA DCM solution (10 mg/mL, 1 mL) containing 2 mg Cy5.5; G-Rg3 was loaded onto the materials in the same manner (Mu et al., ; Zhao et al., ; Wu et al., ). After sonication for 5 min with an ultrasonic homogenizer (SCIENTZ-1200E, Ningbo, China), the mixtures were re-suspended in 0.5% (w/v) PVA aqueous solution (10 mL) and sonicated for a further 5 min. The emulsions were then stirred overnight at room temperature to remove the residual DCM. The final materials were obtained by centrifugation at 7000 rpm for 15 min and washed twice with deionized (DI) water. 2.3. Morphology and characterization of nanocomposites BPs concentrations were determined by inductively coupled plasma atomic emission spectroscopy (Agilent 8800, Tokyo, Japan), as described in our previous studies. SEM imaging was performed on a field-emission SEM (NOVA NANOSEM430, FEI, Eindhoven, Netherlands) at 5–10 kV after gold coating for 120 s (EM-SCD500, Leica, Wetzlar, Germany). TEM imaging was performed on a high-resolution JEOL JEM 2010 F TEM (Hitachi Scientific Instruments, Tokyo, Japan). The ultraviolet–visible–near infrared (UV–vis–NIR) absorption spectra were obtained on a UV–vis–NIR spectrometer with an integrating sphere attachment (ISR-2600 Plus; Shimadzu UV-2600, Kyoto, Japan). FTIR spectra were recorded with a Thermo-Nicolet Nexus 6700 FTIR spectrometer (Madison, WI). 2.4. Drug loading content (LC), encapsulation efficiency (EE), and release determination The G-Rg3 content was examined by high-performance liquid chromatography (HPLC), as reported (Ou et al., ). To measure the LC and EE of G-Rg3 on the nanoplatform, 0.5 mg of BPs/G-Rg3@PLGA nanospheres (NSs) was dissolved in 0.5 mL DCM, and the DCM was then removed in a vacuum drying oven (DZF-6020, Yiheng Ltd, Jinan, China). The residue was re-dissolved in acetonitrile–water (50:50 v/v) solution (1 mL) and stored in a vial for HPLC measurement. The samples were injected into a reverse-phase C-18 column and eluted with a mobile phase of acetonitrile–water (50:50 v/v) at a flow rate of 1.0 mL/min. The G-Rg3 concentration was analyzed using a UV detector (227 nm).
The LC and EE were calculated with the following equations: LC = (weight of G-Rg3 in the NSs / weight of the NSs) × 100%; EE = (weight of G-Rg3 in the NSs / weight of the fed G-Rg3) × 100%. To characterize G-Rg3 release, BPs/G-Rg3@PLGA (5 mg) was dispersed in phosphate buffer solution (PBS, 1 mL) at pH 6.5 containing 0.1% w/v Tween 80. The suspended materials were added to a dialysis bag (MWCO = 3500), followed by dialysis in PBS (20 mL) with slow stirring at 200 rpm at 37 °C. At different time points, the outside PBS was replaced with fresh PBS and subjected to G-Rg3 detection by HPLC. For the NIR irradiation group, the suspended materials were irradiated with an 808 nm NIR laser (1.0 W/cm²) for 5 min before dialysis. 2.5. Photothermal performance assessment The photothermal effect of BPs/G-Rg3@PLGA was assessed under an 808 nm continuous-wave laser (GCSLS-05-007, Daheng New Epoch Technology, Inc., Beijing, China) at different concentrations for 600 s at a power density of 1.0 W/cm². The temperature change was recorded with an infrared thermographic camera system (FLIR SC 620, FLIR System, Inc., Wilsonville, OR) and a thermocouple (TES 1315, TES Electrical Electronic Corp., China) (Shi et al., ; Song et al., ; Wang et al., ). 2.6. Cell culture and animal models The 4T1-luc subline, derived from the murine 4T1 breast carcinoma cell line, was established in our laboratory and maintained under standard culture conditions. Animal experiments were approved by the Animal Ethics Committee (approval no. 2019; file no. 4587TH) of the Cancer Center, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China. Female BALB/c mice (7–8 weeks old) were purchased from Vital River Laboratory Animal Technology Co. Ltd. (Beijing, China). According to previous reports, 4T1 breast carcinoma lung metastasis models, including an orthotopic model and a tail-vein injection model, were set up (Cheng et al., ; Boix-Montesinos et al., ; Jiang et al., ). In the orthotopic model experiments, when the primary tumor volume reached 100 mm³, mice with comparable tumor volumes were selected and randomly separated into groups for the ensuing treatments. Tumor volume was measured with calipers and calculated according to the following equation: tumor volume = (length × width²)/2. Since the cells were labeled with an exogenous luciferase gene, metastasis was monitored by bioluminescent imaging of each mouse after intraperitoneal (i.p.) injection of 100 μL d -Luc (15 mg/mL, XenoLight, PerkinElmer, Waltham, MA) on an IVIS Spectrum Imaging System (Caliper Life Sciences, Hopkinton, MA). The photon flux was quantified and analyzed as total flux with the Spectrum Living Image 4.0 software. After the mice were sacrificed, primary tumors, organs and lungs were dissected and fixed in 4% PBS-buffered paraformaldehyde, followed by hematoxylin and eosin (H&E) staining according to standard protocols. Tissue sections were examined with an optical microscope (Axio Scope A1, Carl Zeiss, Inc., Jena, Germany). 2.7. Assessments of in vitro proliferation and cellular uptake After the various treatments, cells were washed three times with PBS, and cell proliferation was then determined by the Cell Counting Kit-8 (CCK-8) assay following the manufacturer’s instructions (Solarbio, 1000T, Beijing, China).
For the co-staining with calcein-AM and propidium iodide (PI), 4T1 cells were first exposed to BPs/G-Rg3@PLGA for 6 h with or without NIR irradiation and then co-stained with calcein-AM and PI; green denotes calcein-AM staining and red denotes PI staining. To determine the cellular uptake of the nanocomposites, cells were treated with Cy5.5-labeled BPs@PLGA at 50 μg/mL for 4 h, washed and collected, and then fixed in 4% paraformaldehyde for 15 min. Afterwards, the cell nuclei were counterstained with DAPI for 20 min and washed three times with PBS. Finally, fluorescence images were recorded on a confocal laser scanning microscope (CLSM) (Mohamed Subarkhan et al., , ; Subarkhan & Ramesh, ; Balaji et al., ; Sathiya Kamatchi et al., ). 2.8. Tissue distribution of nanocomposites in animals Mice bearing orthotopic 4T1-luc tumors were intravenously (i.v.) injected with Cy5.5-labeled BPs@PLGA through the tail vein. Primary tumors and various organs were collected and subjected to bioluminescent and fluorescent imaging on an IVIS Spectrum Imaging System (Caliper Life Sciences Inc., Hopkinton, MA) at different time points ( n = 3 at each point). The photon flux was quantified and analyzed as total flux with the Spectrum Living Image 4.0 software. 2.9. Statistical analysis All data are represented as the means ± SD. The statistical significance between measurements was evaluated using Student’s t -test. A p value less than .05 was considered statistically significant, and a p value less than .01 was considered highly significant.
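The quantitative formulas in this section (LC, EE and the caliper-based tumor volume) are simple enough to express directly in code. The following is an illustrative sketch only; the function names and the numerical inputs are hypothetical, chosen merely to be consistent with the LC of 12% and EE of 92% reported below.

```python
def loading_content(drug_in_ns_mg, ns_mg):
    """LC (%) = weight of G-Rg3 in the nanospheres / weight of the nanospheres x 100."""
    return drug_in_ns_mg / ns_mg * 100.0

def encapsulation_efficiency(drug_in_ns_mg, drug_fed_mg):
    """EE (%) = weight of G-Rg3 in the nanospheres / weight of the fed G-Rg3 x 100."""
    return drug_in_ns_mg / drug_fed_mg * 100.0

def tumor_volume(length_mm, width_mm):
    """Caliper-based tumor volume (mm^3): (length x width^2) / 2."""
    return length_mm * width_mm ** 2 / 2.0

# Hypothetical inputs, not measurements from this study:
print(loading_content(0.06, 0.5))           # 12.0 (%)
print(encapsulation_efficiency(0.46, 0.5))  # 92.0 (%)
print(tumor_volume(6.0, 5.0))               # 75.0 (mm^3)
```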
Results and discussion 3.1. Synthesis and characterizations of BPs@PLGA BP quantum dots (BPsQDs) were incorporated into poly(lactic-co-glycolic acid) (PLGA) to yield BPs@PLGA, in order to increase the drug-loading capability and tumor targeting (Deng et al., ; Mayorga-Martinez et al., ; Hu et al., ; Biedulska et al., ). PLGA is a biodegradable and biocompatible polymer approved by the Food and Drug Administration (FDA) that is commonly used for the delivery of nanodrugs and therapeutic agents. As shown by scanning electron microscopy (SEM) and transmission electron microscopy (TEM), the obtained BPs@PLGA exhibited a uniform spherical morphology with a monodisperse size of ∼130 nm, the size desired for tumor targeting via the enhanced permeability and retention (EPR) mechanism . Then, ginsenoside Rg3 (G-Rg3), a major chemotherapeutic agent for breast cancer, was loaded into BPs@PLGA. The Fourier transform infrared (FT-IR) spectra, distinguished by the characteristic ether-bond peaks of G-Rg3 (at about 1731 cm−1) and the stretching vibration peaks of PLGA, verified the effective encapsulation of G-Rg3 . Owing to the interaction between the carboxylic acid of the ester group in PLGA and G-Rg3, the characteristic absorption peaks of G-Rg3 were largely masked in BPs/G-Rg3@PLGA . In addition, the drug LC and EE of G-Rg3 in BPs@PLGA were estimated to be 12% and 92%, respectively. 3.2. Chemothermal features of BPs/G-Rg3@PLGA We next examined the chemothermal features of BPs/G-Rg3@PLGA. Compared with the UV–vis–NIR absorption spectrum of G-Rg3, the characteristic absorption in the NIR region arising from the intrinsic absorbance of BPsQDs was well preserved in BPs/G-Rg3@PLGA , providing the premise for a photothermal regime. The photothermal effect was investigated under 808 nm NIR laser irradiation. As shown in , BPs/G-Rg3@PLGA at 20 μg/mL exhibited a rapid temperature increase of ∼25 °C after 600 s of NIR irradiation, and a very fast temperature rise was also observed in the solution at 10 μg/mL. In comparison, the vehicle solution showed insignificant temperature fluctuation . The photothermal efficiency of BPs/G-Rg3@PLGA was further supported by thermal infrared imaging . These data show that PLGA integration and G-Rg3 loading did not impair the excellent photothermal conversion efficiency of the BP material. In addition, our findings reveal that NIR irradiation caused markedly greater drug release (pH 6.5) than that without NIR, reflected in the increased cumulative release of G-Rg3 from BPs/G-Rg3@PLGA. Collectively, these results demonstrate that the light-induced temperature increase offers great benefits for combined chemo- and thermotherapy, along with precisely triggered drug release. 3.3. BPs@PLGA-manifested preferential localization in primary and metastatic tumors Breast cancer has a predilection for metastasis to the lungs because of the favorable interactions between breast cancer cells and the lung tissue microenvironment. However, no specific therapeutics have so far been available to tackle lung metastasis, since most therapies are based on eliminating or debulking the primary tumor before cancer cells escape from it. Therefore, we set out to examine the lung targetability of BPs@PLGA.
To this end, we used the 4T1-luc subline of 4T1 breast cancer cells, which, as we recently described, has a strong tendency to metastasize to the lungs. The NSs were labeled with Cy5.5 to visualize BPs@PLGA in vivo and monitor the fluorescence locations and intensity. Primary tumors and major organs (lung, heart, liver, spleen and kidney) were collected for fluorescence imaging at various times after i.v. injection of BPs/Cy5.5@PLGA . A mounting fluorescence signal was detected in the tumors from 0.5 to 4, 12 and 24 h, and quantification showed that the liver was the primary site of NS accumulation, followed by the spleen and kidney, similar to previous studies. A substantial mass of NSs was found in the tumors, with remarkable cumulative deposition in a time-dependent manner , suggesting a pronounced tumor-homing tendency of NS accumulation. The highly effective accumulation of the BP-based nanoagents in tumor tissue is primarily attributable to the EPR effect: the abnormal vascular framework of tumor tissue allows nanoparticles to enter the tumor more readily than other tissues. Nanoparticles of 10–200 nm are known to benefit strongly from the EPR effect in tumors, and our synthesized BP nanoagents, with a preferred size of ∼130 nm, exploit this effect to localize within the tumor. As the 4T1-luc cells carried an exogenous luciferase reporter gene, tumor cells could be illuminated by i.p. administration of d -luciferin and examined by bioluminescence on an IVIS Spectrum Imaging System. As displayed in , the fluorescence of BPs/Cy5.5@PLGA in primary tumors overlapped with the luciferase bioluminescence, indicating highly selective targeting of these NSs to the primary tumors. In particular, as shown in , we examined the luciferase bioluminescence and BPs/Cy5.5@PLGA fluorescence in lung metastatic tumors; the two signals overlapped in the lungs, demonstrating a robust capacity to penetrate metastatic tumors in the lung (Lee et al., ). These data show that BPs@PLGA provides a favorable basis for the combination chemo-photothermal (CPT) treatment offered by the hybrid BPs/G-Rg3@PLGA, targeting both primary and metastatic tumors. Based on the composite size and structure (which current studies indicate favor efficient tumor accumulation), we hypothesize that the PLGA microencapsulation contributes substantially to the improved tumor targetability of our nanodrugs through its high EPR effect, including the improved specific targeting of lung metastases by BPs@PLGA. 3.4. BPs/G-Rg3@PLGA suppressed lung metastasis in different models Xenograft tumors implanted in the backs of mice, and orthotopic designs focused on primary tumors, are often used in cancer nanomedicine studies, including studies of metastasis (Cossío et al., ). Numerous prior studies set out to eradicate primary tumors to prevent metastasis, but more suitable tumor models are needed to better mimic the biological mechanisms and behavior of cancer in lung metastasis. Thus, we evaluated the antimetastatic potency of BPs@PLGA using different mouse models of lung metastasis: the orthotopic model and the 4T1-luc tail-vein injection model developed in previous studies.
First, the orthotopic model is an excellent model for assessing the efficiency of a chemotherapeutic drug in preventing lung metastasis from primary breast tumors. As shown in , G-Rg3 at 10 mg/kg body weight, a dose much lower than those used in earlier studies, did not significantly inhibit the growth of primary tumors compared with the untreated control. In contrast, BPs/G-Rg3@PLGA carrying an equivalent weight of G-Rg3 reduced primary tumor growth by about 20% even without NIR irradiation, relative to the untreated control, as indicated by the tumor growth curves and tumor weights ( and ). Combinatory photothermal therapy (PTT) effectively improved the tumor-killing efficacy of BPs/G-Rg3@PLGA, as shown by the significantly impaired tumor growth ( and ). Three primary tumors were completely eradicated in the BPs/G-Rg3@PLGA + NIR group , indicating robust CPT potency. Furthermore, tumor tissue in the BPs/G-Rg3@PLGA + NIR group was the most notably destroyed, reflected in the dramatically dying tumor cells and damaged tissue on H&E staining . 4T1-luc lung metastasis was restricted by many of the treatments . In the mice that received G-Rg3 and G-Rg3 + NIR ( , for G-Rg3 + NIR), the numbers of tumor nodules in the lungs were reduced by about 39% and 57%, respectively, relative to the untreated control. In comparison, the tumor nodules in mice treated with BPs/G-Rg3@PLGA without or with NIR were decreased by 64% and 89%, respectively, compared with untreated control mice , and the number of tumor nodules was significantly lower with BPs/G-Rg3@PLGA + NIR than with BPs/G-Rg3@PLGA without NIR . Bioluminescence quantification revealed similar trends, with the largest decrease in the BPs/G-Rg3@PLGA + NIR group . No bioluminescent signal was found at sites other than the lungs, highlighting the strong lung tropism of 4T1-luc cells. Another lung metastasis mouse model, in which 4T1-luc cells were injected into the tail vein, was used to corroborate the inhibitory action on metastasis and to exclude the possibility that the inhibition of lung metastasis was caused by the removal of primary tumors (Fang et al., ). In the G-Rg3, G-Rg3 + NIR and control groups, the bioluminescence intensity grew rapidly over time, reflecting rapid localization of 4T1-luc cells to the lungs and metastatic outgrowth. In mice treated with BPs/G-Rg3@PLGA without or with NIR, the rise in bioluminescence intensity was reduced by approximately 59% and 96%, respectively, compared with the untreated control . Notably, in the BPs/G-Rg3@PLGA + NIR group the increase in bioluminescence intensity was almost eliminated, and only a faint bioluminescent signal remained, reflecting strong inhibition by the combinatory therapy compared with the other groups . These major differences were further supported by direct counting of tumor nodules . Survival was drastically reduced as metastatic tumors formed successively, particularly in untreated mice, whereas BPs/G-Rg3@PLGA + NIR administration considerably improved survival. In addition, the corresponding H&E staining confirmed a drastic reduction in lung metastatic lesions in BPs/G-Rg3@PLGA + NIR-treated mice compared with the other groups .
Significant cellular destruction occurred in the metastatic lung tumors of BPs/G-Rg3@PLGA + NIR-treated mice, similar to the changes in primary tumors (damaged cells are indicated by yellow arrowheads in ), revealing considerable toxicity towards tumor cells in response to the combined chemothermal therapy. Unlike the tumor tissues, organs such as the kidney, spleen, liver and heart showed no obvious morphological lesions after the various administrations, as evidenced by H&E staining . In addition, no abnormal parameters were identified in blood cell counts or blood biochemistry . These findings show that the BPs/G-Rg3@PLGA nanocomposites are highly biocompatible and that the combination therapeutics are biologically safe. Combining the nanosystem’s preferential features – outstanding tropism for primary and metastatic tumors, remarkable photothermal performance and PTT-triggered chemotoxicity – our results show that BPs/G-Rg3@PLGA + NIR therapy has great potential for limiting breast-to-lung cancer metastasis. 3.5. BPs/G-Rg3@PLGA induced marked cell death of 4T1-luc cells We examined the cytotoxic effects of BPs/G-Rg3@PLGA in vitro to confirm the mechanisms underlying the increased apoptosis in primary and metastatic tumors in response to BPs/G-Rg3@PLGA + NIR. According to the CCK-8 analysis, BPs@PLGA itself did not substantially inhibit the proliferation of 4T1-luc cells, even at high concentrations. Subsequently, CLSM was performed to determine the intracellular uptake of Cy5.5-labeled BPs@PLGA by 4T1-luc cells. As shown in , in contrast with the absence of fluorescence in cells treated with free Cy5.5, BPs@PLGA produced intense red fluorescence in the cytoplasm, confirming its suitability as a nanodrug delivery vehicle. In addition, 4T1-luc cells were subjected to a series of treatments analogous to those used in vivo . As seen in , G-Rg3 at 50 μg/mL caused only slight cell death, with or without NIR, compared with untreated cells. It should be noted that this concentration of G-Rg3 is much lower than those used in prior studies, consistent with the low antiproliferative activity of G-Rg3 itself towards 4T1-luc cells. However, substantial inhibition was observed in BPs/G-Rg3@PLGA-treated 4T1-luc cells, with an approximately 64-fold increase in PI-positive cells and a 63% reduction in cell proliferation , demonstrating the value of the drug delivery platform. Moreover, the cells responded even more strongly to BPs/G-Rg3@PLGA + NIR, with a more than 100-fold increase in PI-positive cells and a >85% decrease in cell proliferation , in accordance with the in vivo results described above. These in vitro findings reveal the multimodal antiproliferative mechanisms of the BPs/G-Rg3@PLGA nanocomposites: a nanocomposite-based ‘Trojan horse’ framework for delivering the chemotherapeutic agent, combined with CPT toxicity, to induce cell apoptosis.
Conclusions To conclude, we have established a multifunctional nanodrug, BPs/G-Rg3@PLGA, that effectively targets both primary and lung metastatic tumors and possesses prominent chemothermal characteristics for controlling metastatic tumor growth. BPs/G-Rg3@PLGA displays tropism for both primary tumors and lung metastases and is finely tailored to suppress metastatic tumors in the lung by raising the local temperature under NIR laser irradiation. Additionally, mechanistic studies showed that the expedited release of G-Rg3 from the nanoagents acts synergistically with the photothermal effect to cause apoptosis-dependent cell death. Our nanocomplex also offers remarkable biocompatibility towards various organs and tissues. Taken together, this study introduces a novel MTN strategy for the effective therapy of lung cancer metastasis.
|
Vertical differences in carbon metabolic diversity and dominant flora of soil bacterial communities in farmlands | ad33607f-ee41-40d2-9594-2f9b4e70216d | 11043072 | Microbiology[mh] | Land serves as an indispensable material foundation and resource carrier for human survival and social development and provides important ecological and economic benefits . Soil organic carbon is one of the main components of soil and one of the important factors for characterizing soil quality and maintaining the productivity of terrestrial ecosystems . The carbon cycle is an important part of the biogeochemical cycle. Spatial heterogeneity in soil, development time, climatic zone, vegetation, hydrology and other environmental factors directly or indirectly affect the carbon cycle in soil . Carbon and nitrogen metabolism is a necessary physiological activity for the survival and growth of various organisms. In farmland ecosystems, soil organic carbon is mainly derived from crop litter, tillage measures and crop root exudates . The conversion of plant material into soil organic carbon, through decomposition and eventual stabilization, is mainly achieved by microorganisms. Under the action of soil microorganisms, most of the organic matter is decomposed into CO2 and released back into the atmosphere, while a small amount is difficult to decompose and eventually persists in the soil as stable organic matter (mainly humus). A high carbon-to-nitrogen ratio (C/N) may be more beneficial for SOC sequestration , but litter input generally favours soil nitrogen retention, reduces the soil C/N and increases microbial activity , which is a major factor in the SOC changes caused by increased nitrogen. The diverse conditions found within soil particles encourage the growth of different types and amounts of soil microorganisms, and changes in these factors can affect soil quality , . Microorganisms can adapt to different growth environments by finely regulating the balance of carbon and nitrogen metabolism , . The spread of soil microorganisms can be influenced by environmental conditions. For example, studies have indicated that pH, soil water conditions and organic matter dissolution are key determinants of microbial biogeographic patterns , while soil nutrients such as total nitrogen and total phosphorus exert considerable effects on soil microbial abundance . Bolan et al. reported that most dissolved organic carbon is consumed during soil microbial activities and can provide suspended organic carbon particles for adsorption during soil aggregate formation, which plays a major role in the initial aggregate composition process. Soil microorganisms constitute the driving force of material and energy cycling in soil, and their survival and reproduction are mainly sustained by organic carbon in soil . Soil microorganisms participate in the formation and decomposition of humus and in the transformation and cycling of soil nutrients, and they promote energy flow and material cycling in terrestrial ecosystems – . They are important indicators for evaluating agricultural practices , are often regarded as sensors of agricultural ecosystems, and fulfil an important role in ecosystem functions through interactions with the surrounding environment .
Numerous researchers have investigated bacterial communities in surface and subsurface soils, and while considerable differences in the bacterial community structure between the two types of soils have been reported , , the soil microbial communities in both soils affect the soil material cycle and physiological ecology of plants . However, most studies on soil bacterial carbon metabolism have focused on near-surface soil (0 ~ 15 cm) , , with little data on the vertical functional organization of soil bacterial populations. The vertical profile of soil bacterial carbon metabolism may therefore reflect variations in the carbon utilization capacity and organization of microbial communities, as well as the contribution of deep soil layers to the soil carbon cycle. Accordingly, we collected soil profiles from typical tobacco–rice multiple cropping areas (Changsha, Hengyang, and Chenzhou) in Hunan Province, studied the relationships between soil bacterial carbon source metabolic activity and different soil chemical and physical properties at various depths through the Biolog-ECO test method and 16S sequencing analysis, and analysed the influence of environmental factors on metabolic activity. The objective of this study was to define the vertical distribution of the soil bacterial capacity for carbon metabolism and to better understand how variations in local soil characteristics affect the carbon metabolism of different soil layers and the capacity of soil microbial communities to metabolize carbon, thereby providing a systematic basis for understanding soil quality changes and fostering sustainable utilization of tobacco–rice multiple cropping fields in Hunan.
Location of the research area
The research region is located in the southeastern area of Hunan Province, China, which belongs to the subtropical monsoon humid climate zone. The average temperature generally ranges from 16 to 19 °C, the annual rainfall ranges from 1200 to 1700 mm, and the frost-free period is 253–311 days. The soil exhibits a loam texture.
Sampling
In November 2017 (after the rice harvest), three typical fields with approximately 10 years of tobacco and rice planting were selected in the tobacco–rice multiple cropping areas of Baiyun village (25°46′31′′N, 112°40′30′′E), Renyi town, Guiyang County, Chenzhou city (26°40′17′′N, 112°58′18′′E), Yanzhong village, Mashui town, Leiyang city, Hengyang city (26°40′17′′N, 112°58′18′′E) and Binghe village, Guandu town, Liuyang city, Changsha city (28°20′31′′N, 113°55′34′′E). We used a purpose-built soil column sampler (50 cm long, 7.5 cm in diameter) to collect undisturbed 50 cm soil columns at six randomly chosen locations within each representative field. We divided each soil column into five layers at intervals of 10 cm. Subsequently, all samples of the same soil layer collected at the six locations within the same field (500 m² each) were mixed, ensuring that the sample represented the overall characteristics of that soil layer in the field. Next, each combined soil sample was evenly divided into three portions. Consequently, for each typical field and its corresponding soil layers, we obtained three independent soil samples. By this method, we acquired a total of 45 soil samples across all the fields (i.e., 5 soil layers per field × 3 samples per soil layer × 3 fields = 45 soil samples).
Determination of the soil physical and chemical properties
After the soil samples were dried, the soil physicochemical properties were determined according to a previous protocol : the soil pH was measured using a potentiometric method. Soil organic carbon (SOC) was determined using a total organic carbon analyzer (Vario TOC Cube, Elementar Analysensysteme GmbH, Germany). The total nitrogen (TN) and available nitrogen (AHN) contents were rapidly determined using an automated Kjeldahl apparatus. After the SOC and total nitrogen data were obtained separately, the carbon-to-nitrogen ratio (CNR) was calculated. The soil moisture content (SMC) was assessed via oven-drying, the soil bulk density (SBD) was measured via the core cutter method, and the soil porosity (SP) was indirectly derived via density calculations.
Determination of the soil capacity for utilizing different carbon sources
The characteristics of soil microbial carbon source consumption were examined with the Biolog-ECO method . Representative soil samples were collected from the target locations and transported to the laboratory for natural drying and grinding. Subsequently, an appropriate amount of soil was mixed with sterile water at a ratio of 1:50 (g), and a microbial suspension was prepared by shaking at 200 rpm for 30 min to ensure a uniform distribution of microorganisms. The suspension was adjusted to an optical density (OD) of 0.15 at a wavelength of 600 nm to guarantee an appropriate microbial cell density. Thereafter, equal volumes of the microbial suspension were inoculated into each well of a Biolog microplate. The inoculated microplates were placed in a thermostatic incubator maintained at 28 °C, where continuous cultivation occurred for 10 days with predetermined intervals of 24 h between measurements.
During cultivation, the microorganisms utilized various carbon sources within the microplate wells, undergoing metabolic activities that produced reduced products that reacted with colour-developing agents, resulting in colour changes. The absorbance values were measured at 590 and 750 nm using an ELx808 microplate reader (BioTek, United States) at 0, 24, 48, …, and 240 h (24-h intervals). The average well colour development (AWCD), Shannon species diversity index (H), and Simpson dominance index (D) were then calculated during the course of the experiment as follows:

$$AWCD = \sum (C_{i} - R)/31 \qquad (1)$$

where C_i denotes the difference between the absorbance values of the i-th carbon source well at 590 and 750 nm, and R denotes the corresponding difference for the control well.

$$Shannon\;(H) = -\sum P_{i} \ln(P_{i}) \qquad (2)$$

$$Simpson\;(D) = 1 - \sum P_{i}^{2} \qquad (3)$$

where P_i is the ratio of the relative absorbance of the i-th carbon source well to the sum of the relative absorbances of the whole plate.
DNA extraction, PCR amplification, MiSeq sequencing, and data processing
DNA was isolated from 45 soil samples using an Omega bacterial DNA kit (Omega Genetics), and a NanoDrop One (Thermo Fisher Scientific) was used to measure the DNA purity and concentration. With genomic DNA as a template, the V4 region of the 16S rRNA gene was amplified by independent PCR using the primer sets 515F (5'-GTGCCAGCMGCCGCGGTAA-3') and 806R (5'-GGACTACHVGGGTWTCTAAT-3'). The resulting amplicon library was subjected to paired-end 250 bp sequencing on the Illumina NovaSeq 6000 platform . Following initial collection of the sequencing data, we first removed the primers using the Cutadapt program ( https://github.com/marcelm/cutadapt/ ). Then, following the quality control parameters, we acquired paired-end clean reads. Next, FLASH was used to merge the forward and reverse reads , and OTU clustering was conducted using UPARSE at a 97% similarity criterion . Finally, taxonomic assignment was performed with the Ribosomal Database Project (RDP) classifier, with the minimum confidence set to 50%. To account for different sequencing depths, each sample was resampled to 42,303 sequences, which were clustered into 18,190 OTUs. A publicly available galaxy analysis system ( http://mem.rcees.ac.cn:8080/ ) was used for the analyses .
Statistical analysis
Data were tabulated and analysed in Excel 2019, and Origin 2018 was used for plotting. To identify significant differences between treatments, SPSS 22.0 (SPSS Inc., Chicago, Illinois, USA) was used with analysis of variance (ANOVA) and least significant difference (LSD) approaches. Differences were considered statistically significant at p < 0.05. Duncan's multiple range test was applied to compare means. SigmaPlot 12.0 (Systat Software, Inc., San Jose, CA, USA) was used to create the figures. The primary discriminant categories in the bacterial population were investigated using linear discriminant analysis (LDA) effect size (LEfSe) (LDA score > 3.5, p < 0.05). Spearman's correlation was used to assess the associations between the bacterial populations in the soil samples and other variables.
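Returning to Eqs. (1)–(3): the indices can be computed directly from the raw plate absorbances. The following is a minimal Python sketch (an illustrative reimplementation, not the authors' code; the input layout, the zeroing of negative well responses, and all names are assumptions):

```python
import numpy as np

def biolog_eco_indices(a590, a750, r590, r750):
    """AWCD, Shannon (H) and Simpson (D) from one Biolog-ECO plate reading.

    a590, a750 -- absorbances of the 31 carbon-source wells at 590/750 nm
    r590, r750 -- absorbances of the water control well at 590/750 nm
    """
    c = np.asarray(a590, float) - np.asarray(a750, float)  # C_i per well
    r = r590 - r750                                        # R, control well
    rel = np.clip(c - r, 0.0, None)   # negative responses set to 0 (assumed convention)
    awcd = rel.sum() / 31.0           # Eq. (1): AWCD = sum(C_i - R)/31
    p = rel / rel.sum()               # P_i: each well's share of the plate total
    p = p[p > 0.0]                    # drop zeros before taking logarithms
    shannon = -np.sum(p * np.log(p))  # Eq. (2): H = -sum P_i ln P_i
    simpson = 1.0 - np.sum(p ** 2)    # Eq. (3): D = 1 - sum P_i^2
    return awcd, shannon, simpson
```

In this study the diversity indices were evaluated at the 240-h reading of each sample (see Results).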
Ability of the bacterial communities in the different soil layers to utilize carbon sources
AWCD may be used to determine the capacity of a soil microbial community to metabolize different carbon sources. Figure shows the overall performance of the AWCD value of soil bacterial carbon source consumption in the different Hunan tobacco–rice multiple cropping fields. The AWCD value decreased with increasing soil depth, indicating that bacteria in the topsoil layer could better utilize carbon sources than those in the deep soil layers. Notably, the carbon metabolism capacity of the soil bacterial community in Chenzhou (Fig. A) sharply decreased in layer T2 (10 ~ 20 cm soil layer). However, in Hengyang (Fig. B) and Changsha (Fig. C), the soil bacterial community potential for carbon metabolism decreased to an extremely poor level in layer T4 (30 ~ 40 cm soil layer).
Analysis of the carbon metabolism functional diversity of the bacterial communities in the different soil layers
In line with the trend of the AWCD curves, the 240-h AWCD values were selected to characterize the functional diversity of the soil bacterial communities. The outcomes are listed in Table . With increasing soil layer depth, the Shannon and Simpson index values at the three locations decreased. There was a significant difference in the Shannon and Simpson index values between the topsoil (0 ~ 20 cm) and subsoil (20 ~ 50 cm) layers in Chenzhou and Hengyang, whereas in Changsha the Shannon and Simpson index values decreased with soil depth but without significant differences between the soil layers.
Intensity of the use of various carbon sources by the bacterial populations in the different soil layers
The 31 carbon sources on the Biolog ECO microplate can be categorized into six types, namely, carbohydrates (10), amino acids (6), carboxylic acids (7), polymers (4), amines (2) and phenolic acids (2). The ability of soil microorganisms cultured for 240 h to utilize the different types of carbon sources was analysed, and the results are shown in Fig. . The ability of bacteria to utilize the six distinct carbon source types across the multiple soil layers generally decreased with soil layer depth. In the Chenzhou area (Fig. A), compared to the other soil layers, T1 exhibited a much greater level of consumption of carbohydrates, amino acids, carboxylic acids, and phenolic acids. Greater utilization of polymers was observed in T1 than in T3, T4, or T5, and significantly higher polymer utilization was observed in T2 than in T3 and T5. There were no notable differences in the capacity of bacteria to use amines among the five soil layers. In the Hengyang area (Fig. B), compared to layers T3, T4, and T5, layer T1 attained a much greater capacity for bacterial carbohydrate utilization. The bacteria in the T1 and T2 soil layers used amino acids to a far greater extent than those in the remaining three soil layers. The ability to use phenolic acids was significantly higher in the T1 layer than in the other soil layers. The ability of bacteria to use carboxylic acids and polymers was significantly greater in the T1 and T2 layers than in the other soil layers, and layer T1 exhibited a much higher capacity for amine utilization. The T1 layer in Changsha (Fig. C) exhibited a significantly greater capacity than the other layers for the utilization of amino acids, carboxylic acids, and carbohydrates. Layer T1 also indicated a substantially higher capacity for polymer consumption than layers T3, T4, and T5.
Compared to the other soil layers, T1 exhibited a much higher capacity for phenolic acid use.
Principal component analysis of bacterial carbon source metabolism in the different soil layers
Figure shows that PC1 and PC2 explained 49.1% and 9.3% of the total variance, respectively, for a combined contribution of 58.4%. While the separation between Chenzhou (CZ), Hengyang (HY) and Changsha (CS) was not obvious on the PC1 and PC2 axes, there was a clear distinction between the various soil strata. All the soil layer T1 samples and the majority of the layer T2 samples had positive PC1 values. Soil layer T3 showed the opposite pattern to that of layer T2, while layers T4 and T5 exhibited entirely negative PC1 values. The five soil layers were spread along the positive and negative axes of PC2, with the deeper soil layers tending to cluster closer to the zero value of the PC2 axis. As a result, it was clear that while there were differences between the three farming regions of Chenzhou, Hengyang, and Changsha in terms of the capacity of the soil bacterial communities to utilize carbon sources, these differences were not as significant as the differences between the various soil layers. In principal component analysis, the loadings of the 31 carbon sources (Table ) reflect how closely each carbon source is related to a principal component: the higher the absolute loading, the closer the relationship. There were 20 carbon sources with absolute PC1 loadings greater than 0.6. These sources included five carbohydrates, four amino acids, five carboxylic acids, four polymers, and two phenolic acids. There were 10 carbon sources with absolute loadings above 0.8, including two carbohydrates (D-xylose and D-mannitol), three amino acids (L-asparagine, L-phenylalanine and L-serine), two carboxylic acids (γ-hydroxybutyric acid and methyl pyruvate), two polymers (Tween 40 and Tween 80), and one phenolic acid (4-hydroxybenzoic acid). Only phenylethylamine exhibited an absolute PC2 loading greater than 0.6, and no carbon source had a PC2 loading greater than 0.8. A thorough investigation revealed that the vertical characteristics of the carbon metabolic functions of the soil bacterial populations in the tobacco–rice multiple cropping fields in Hunan were closely related to carbohydrates, amino acids, carboxylic acids, and polymers.
Differential vertical distributions of the dominant bacterial groups among the soil layers
Linear discriminant analysis effect size (LEfSe) was applied from the phylum level to the genus level, and the bacterial community groups exhibiting differences in each soil layer of the tobacco–rice multiple cropping fields were compared. The results (Fig. a, Table ) showed that 70 taxa demonstrated significant differences between the five soil layers. Six phyla (Acidobacteriota, 14.95%; Armatimonadota, 0.99%; Bacteroidota, 10.22%; Chloroflexi, 30.79%; Myxococcota, 1.65%; and Planctomycetota, 3.01%) and five genera ( Bryobacter , 0.83%; RB41 , 0.89%; Flavisolibacter , 1.34%; Anaerolinea , 0.81%; and UTCFX1 , 13.56%) were considerably more abundant in layer T1 than in the other soil layers.
The relative abundances of four phyla (Gemmatimonadota, 2.61%; Latescibacterota, 1.49%; NB1_j, 0.68%; and RCP2_54, 1.01%) were considerably higher in layer T2 than in the other layers. Compared to the other soil layers, T3 exhibited considerably higher relative abundances of one phylum (Sva0485, 6.16%) and two genera ( Trichlorobacter , 1.05%; and mle1_7 , 0.75%). Two phyla (MBNT15, 2.83%; and Patescibacteria, 4.77%) and one genus ( MM2 , 0.94%) attained considerably greater relative abundances in the T4 soil layer than in the other soil layers. Compared with the other soil layers, the T5 soil layer indicated a significantly higher relative abundance of one phylum (Proteobacteria, 37.63%) and four genera ( Arthrobacter , 1.81%; Pedobacter , 0.70%; Massilia , 6.22%; and Lysobacter , 22.64%). Overall, the distinctive bacterial communities encountered in each soil layer may be one of the elements causing variations in soil microbial carbon metabolism. The relationships between the distinct bacterial groups in each soil stratum and their carbon source use levels are shown in Fig. b and c. At the phylum level, the relative abundances of the differential groups in the topsoil layer (0–20 cm) were notably greater than those in the subsoil layers. With the exception of amines, the degree of metabolism of the carbon sources was significantly positively related to Armatimonadota, Chloroflexi, Myxococcota, Planctomycetota, Gemmatimonadota, Latescibacterota, and RCP2_54. The relationship between Acidobacteriota and carboxylic acids was statistically significant, as was the relationship between NB1_j and carbohydrates. Patescibacteria was significantly negatively correlated with the five carbon source classes other than amines, and the relative abundance of this group in the subsoil layers was notably greater than that in the topsoil layer. Proteobacteria exhibited significant negative correlations of different degrees with the degree of metabolism of all six carbon source classes. At the genus level, the relative abundances of Bryobacter , RB41 , Flavisolibacter , Anaerolinea , and UTCFX1 differed significantly between the topsoil and subsoil layers and were significantly positively associated with the degree of metabolism of the five carbon source classes other than amines. In contrast, the relative abundances of Trichlorobacter , Arthrobacter , Pedobacter , Massilia , and Lysobacter were significantly greater in the subsoil layers than in the topsoil layer and were significantly negatively correlated with the metabolism of the five carbon source classes other than amines.
Soil factors driving carbon metabolism of the bacterial populations in the different soil layers
Table provides the test results for the physical and chemical characteristics of the soil layers in the different agricultural areas. Data on the level of carbon source exploitation and the physical and chemical compositions of the soil samples were subjected to redundancy analysis (RDA). Table and Fig. show the correlation between the carbon metabolic activity of the soil bacterial populations and the physical and chemical characteristics of the different soil layers. The RDA1 and RDA2 axes cumulatively explained 41.7% of the total variance: the first axis explained 38.8%, the second axis explained 2.9%, and the correlation structure was therefore captured mainly by the RDA1 axis.
The soil layers, from shallow to deep, ranged from positive to negative on the RDA1 axis. Among the soil properties, SP, SMC, SOC, TN, AHN, and CNR were closely associated with the RDA1 axis, while SBD and pH were not. The impacts of pH and SBD on soil microbial carbon metabolism generally varied inversely with soil depth, and the impacts of SP, SMC, SOC, TN, AHN, and CNR on soil microbial carbon metabolism increased with decreasing soil depth.
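For orientation, the core computation behind such a redundancy analysis (a PCA of the carbon-utilization matrix after it has been regressed on the soil variables) can be sketched in a few lines. The snippet below is an illustrative numpy reimplementation under an assumed data layout, not the software actually used in this study:

```python
import numpy as np

def simple_rda(Y, X):
    """Minimal redundancy analysis.

    Y -- (n_samples, 31) carbon-source utilization matrix
    X -- (n_samples, n_properties) soil variables (SP, SMC, SOC, TN, ...)
    Returns sample scores on the RDA axes and the fraction of the total
    variance in Y that each constrained axis explains.
    """
    Yc = Y - Y.mean(axis=0)                    # center the responses
    Xc = X - X.mean(axis=0)                    # center the predictors
    B, *_ = np.linalg.lstsq(Xc, Yc, rcond=None)
    Yhat = Xc @ B                              # part of Y explained by X
    U, s, Vt = np.linalg.svd(Yhat, full_matrices=False)
    scores = U * s                             # site scores per RDA axis
    explained = s**2 / np.sum(Yc**2)           # variance share per axis
    return scores, explained
```

With Y holding the per-sample carbon-source utilization profiles and X the measured soil properties, explained[0] would correspond to the share of total variance attributed to RDA1 (38.8% in this study).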
The soil depth may cause changes in many soil environmental factors. For instance, the changes in the quantity of soil nutrients at various depths are indirectly influenced by the accessibility of soil moisture , which consequently influences the diversity and community structure of soil microorganisms . Numerous studies have demonstrated that the community structure and functional organization of soil microorganisms change in relation to the location , climate , soil properties , and tillage technique , . The findings of this study demonstrated that in the Hunan tobacco–rice multiple cropping fields, the ability of soil bacterial communities to metabolize carbon decreased with increasing soil depth and that both the diversity and abundance of the soil bacterial community generally decreased with increasing soil depth. Similar to the findings of Erhunmwunse et al. , the bacterial diversity peaked at 10 cm from the top of the soil profile. Through LEfSe analysis, we found that the bacterial groups that differed most between soil layers were correlated with the intensity of carbon source metabolism. Notably, the unique bacterial communities in the topsoil layer can boost carbon metabolism, which favours the degradation and transformation of organic carbon, thereby sustaining favourable material and energy cycles . Specific subsoil bacterial populations may inhibit carbon metabolism in soil, limiting the rate of transformation and degradation of soil organic carbon and thus preventing the accumulation of organic carbon . The measured physical and chemical characteristics revealed that with increasing soil depth, the soil bulk density, porosity, water content, organic matter content, and total nitrogen content decreased. Correlation analysis showed a significant relationship between soil microorganisms and soil physical and chemical characteristics. The microbial community composition changes with soil bulk density: an increased bulk density results in decreased soil aeration and porosity, which is generally unfavourable for the development of soil microbial communities . An increase in organic input is beneficial for increasing the microbial diversity in soil . This occurs because soil organic matter itself contains a large amount of carbon and participates in the synthesis of various carbon sources for microbial utilization . Increased microbial diversity may accelerate the turnover rate of nitrogen in soil , which suggests that higher microbial diversity results in the provision of more nutrients for promoting plant growth . Therefore, the decline in soil physical and chemical qualities with depth results in a decrease in soil bacterial diversity and abundance, as well as a reduction in the capacity to metabolize carbon, all of which reflect the poorer living conditions of soil microorganisms in the deeper soil layers . We believe that this may be due to the effects of tillage. In long-term shallow rotary tillage, soil nutrients are exposed at the surface as the fertile lower soil is turned up , resulting in an uneven soil nutrient distribution and low microbial enrichment in deep soil. In the case of perennial rotary tillage, the local plough layer is shallow and the underlying soil is severely compacted, which hinders the development of the crop root system.
The roots of the two local crops are mostly concentrated in the 0 ~ 20 cm soil layer , and root exudates and stubble residues provide a stable growth environment and various nutrients for the surrounding soil microorganisms . Finally, the vicious cycle of deteriorating soil physical and chemical qualities and weakening soil bacterial carbon metabolism causes a progressive deterioration in soil quality. To find practical ways to enhance the physical and chemical characteristics of deep soil and the capacity of deep soil bacterial communities to metabolize carbon, it is therefore necessary to analyse how the structure of microbial communities changes under optimized tillage measures and to thoroughly investigate the coupling mechanism between community structure and function .
According to our findings, the capacity of bacterial populations to metabolize carbon decreased with depth across the 0–50 cm profile in tobacco–rice multiple cropping fields in Hunan. The most utilized carbon sources were carbohydrates, amino acids, and polymers. The vertical properties of soil bacterial community carbon metabolism were closely correlated with carbohydrates, amino acids, carboxylic acids, and polymers. The dominant bacterial groups in the topsoil (such as Chloroflexi, Acidobacteriota and Bacteroidota) and subsoil (such as Proteobacteria and Patescibacteria) layers were significantly positively and negatively correlated, respectively, with the carbon metabolism intensity. The key soil environmental parameters that influenced the variations in carbon metabolism of the bacterial communities in the different soil strata were SP, SBD, SOC, AHN, and CNR.
Supplementary Information.
|
ARE QUALITY INDICATORS IMPORTANT IN COLONOSCOPIES? ANALYSIS OF 3,076 EXAMS IN A PRIVATE TERTIARY SERVICE IN SOUTHEASTERN BRAZIL | 347a0f19-37ca-4397-933e-600bb11720cc | 11810112 | Surgical Procedures, Operative[mh] | Colorectal cancer (CRC) is the third most common neoplasm among men and women worldwide . In Brazil, CRC ranks third in cancer-related mortality and second in incidence among males and females . Since its process of carcinogenesis is known, screening for this neoplasm is feasible . Adenomas account for 70% of sporadic CRC cases, while serrated lesions account for 25-30% , . The success of screening programs is demonstrated by the reduction in the incidence of the disease and associated morbidity/mortality as a result of the early identification and treatment of lesions , , . However, Brazil does not have a well-established screening program . The official recommendation is to begin CRC screening in average-risk individuals at age 50 , with colonoscopy being the preferred screening test. In the long term, this method is expected to reduce the incidence of CRC by 31-71% and mortality by 65-88% through the identification and treatment of precursor lesions . Specific quality criteria should be adopted to ensure the effectiveness of colonoscopies, including good colon preparation in more than 90% of tests, a cecal intubation rate ≥95%, a withdrawal time >6 min, a significant adenoma detection rate (ADR) and sessile serrated polyp detection rate (SSPDR), an adequate resection technique, use of high-resolution imaging, and appropriate surveillance protocols for identified lesions , . The ADR is defined as the percentage of colonoscopies in which at least one adenoma is identified and has been accepted as the primary quality indicator for these tests , . Other metrics such as the polyp detection rate (PDR), advanced adenoma detection rate (AADR), and SSPDR may also be used . This study aimed to evaluate the quality of colonoscopies performed in a private tertiary service in the interior of São Paulo State by calculating the ADR, AADR, and PDR and comparing the results with literature data. This retrospective observational study involved individuals referred for colonoscopy for CRC screening, polyp follow-up, inflammatory bowel disease monitoring, and symptom investigation (abdominal pain, change in bowel habits, rectal bleeding, and anemia). The examinations were conducted at the Colonoscopy Service of Hospital Centro Médico de Campinas, Campinas (SP), from January 2018 to January 2020. Patients between 18 and 85 years were included in the study. Exclusion criteria were missing colonoscopy and histopathological data, inadequate bowel preparation (Boston Scale <6), examinations lasting less than 10 min or performed on an emergency basis, active endoscopic inflammatory bowel disease, cases referred for therapeutic procedures (resection of pre-identified lesions, endoscopic dilation, treatment of surgical complications), prior total colectomy, and incomplete examination, except for cases of stenosing neoplasia. Bowel preparation consisted of administering 500 mL of a 10% mannitol solution or three sachets of sodium picosulfate (Picoprep ® ) combined with a clear liquid diet on the day before the test. Colon preparations were assessed using the Boston Bowel Preparation Scale in examinations conducted after January 2019, when this scale was adopted by the service. All procedures were performed using Olympus CF-Q180AL and CF-H170L video colonoscopes.
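Operationally, each indicator defined above is a simple proportion over per-examination records, and the group comparisons reported below reduce to two-sample tests of proportions. The following Python sketch illustrates the idea (the record fields are hypothetical; the study's own analyses were performed in R):

```python
from statsmodels.stats.proportion import proportions_ztest

def detection_rate(exams, key):
    """Share of colonoscopies with at least one lesion of the given kind.

    exams -- list of dicts, one per colonoscopy, e.g.
             {"n_polyps": 2, "n_adenomas": 1, "screening": True}
    key   -- "n_adenomas" for the ADR, "n_polyps" for the PDR, etc.
    """
    positive = sum(1 for e in exams if e[key] > 0)
    return positive / len(exams)

def compare_indications(exams, key):
    """Two-sample test of proportions: screening vs. other indications."""
    groups = [[e for e in exams if e["screening"]],
              [e for e in exams if not e["screening"]]]
    counts = [sum(1 for e in g if e[key] > 0) for g in groups]
    nobs = [len(g) for g in groups]
    return proportions_ztest(counts, nobs)  # (z statistic, p value)
```

Here detection_rate(exams, "n_adenomas") yields the ADR, swapping the key gives the PDR or AADR, and compare_indications mirrors the screening-versus-other-indications comparison.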
The following clinical and demographic characteristics of the participants were analyzed: age, sex, colonoscopy indication, total examination time, and complications. Lesions in the cecum, ascending colon, and transverse colon were classified as proximal; lesions in the descending colon, sigmoid, and rectum were classified as distal. Based on the histopathological findings, polyps were classified as hyperplastic, serrated adenoma, tubular adenoma, villous or tubulovillous adenoma, and adenocarcinoma. The Vienna classification was used to define the degree of dysplasia . Lesions ≥10 mm, with a villous component, or with high-grade dysplasia were defined as advanced adenomas . Pathologists from two laboratories in Campinas (SP) provided the pathology reports according to the examiners' preferences. To describe the profile of the sample, frequency tables of the categorical variables were created, and means, standard deviations (SD), and absolute and relative frequencies were calculated. A test of proportions was used to compare lesion detection rates between the screening group and the group of other indications. A test for trend in proportions was applied to compare lesion detection rates among different age groups. A level of significance of 5% was adopted. The analyses were performed using R 2023 (R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria; https://www.R-project.org/ ). The study was approved by the Research Ethics Committee of the Faculty of Medical Sciences, Universidade Estadual de Campinas (number: 5.084.635 and CAAE: 52244821.9.0000.5404), and was conducted in accordance with the Declaration of Helsinki. Data from 3,686 colonoscopies were collected, and 610 exams were excluded. Inadequate bowel preparation (n=149), incomplete data (n=113), and examinations performed on an emergency basis (n=70) were the main reasons for exclusion. The final sample consisted of 3,076 colonoscopies. Females accounted for 53.5% of the sample, and the mean age was 57.2 years (SD=13.1) . The cecal intubation rate was 97.4%, and the mean total examination time was 13.6 min. Cecal intubation and withdrawal times were recorded for 161 colonoscopies, with mean times of 8.47 and 6.14 min, respectively. The Boston Bowel Preparation Scale was assessed in 952 colonoscopies, and the mean score was 8.9 . Complications were reported in 39 colonoscopies (1.3%), with abdominal pain requiring analgesia being the most frequent (55.8%). Bleeding occurred in six examinations (13.9%). There was one case of intestinal perforation (2.3%) . Complications were defined as those occurring within 30 days of the procedure. All cases of bleeding ceased spontaneously; however, one patient required a revisional colonoscopy with endoclip placement at the polypectomy site. The case of intestinal perforation was treated by laparoscopic rectosigmoidectomy with a satisfactory outcome. A total of 756 adenomas were identified. Tubular adenoma was the most prevalent subtype, observed in 20% of all colonoscopies and in 62.7% of those with positive findings. Additionally, 191 hyperplastic polyps and 61 serrated adenomas (sessile serrated lesions by the current classification) were identified, corresponding to one-quarter of the lesions in positive tests. Moreover, 13 in situ adenocarcinomas and four advanced adenocarcinomas were detected . In total, 203 flat lesions were identified, with a mean size of 13.7 mm (SD=7.62 mm).
There were 567 sessile polyps, with a mean size of 5.5 mm (SD=3.33 mm). The mean size of pedunculated polyps was 15.6 mm (SD=7 mm), while semi-pedunculated polyps had a mean size of 11 mm (SD=3.8 mm). Tubular adenoma was the most frequent histological subtype among all morphological types. The highest prevalence of lesions was observed in the sigmoid colon, accounting for 36% of positive tests. The overall PDR was 23% (28% in men and 20% in women). This rate was 5% in individuals younger than 30 years but 26% in those aged 50 years and older. Polyps were detected in 30% of examinations of men aged ≥50 years. A statistically significant association (p<0.001) was observed between PDR and age groups . The PDR was 27% in the screening group and 10% in the group of other indications, with the difference being statistically significant (p<0.001) . The overall ADR was 20%. When stratified by age, the ADR was 1% in individuals younger than 30 years, 11% in those aged 30-45 years, 15% in those aged 45-50 years, and 23% in individuals over 50 years . A statistically significant association was observed between ADR and age group, with higher ADRs in the older age groups (p<0.001) . When stratified by sex, the ADR was 17% in women and 24% in men. Considering sex and age, the ADR was 20% in women and 27% in men over 50 years . Considering only CRC screening, the ADR was 23% versus 9% for other indications. This difference was also statistically significant (p<0.001) . Adenomas were more frequently detected in the distal segments, i.e., the descending colon, sigmoid, and rectum, accounting for 33% of all lesions. The mean number of adenomas per colonoscopy, calculated from colonoscopies with one or more adenomas, was 1.22. Advanced adenomas were detected in 3% of the tests and were more frequent in men over 50 years. In this study, no advanced adenomas were found in individuals under 30 years of age. Considering only tests performed for screening purposes, the AADR was 4% . There was also a predominance of these lesions in the distal segments. Hyperplastic polyps were observed in 6% of the tests, with a statistically significant difference between examinations performed for screening purposes (7%) and other indications (2%) (p<0.001). A statistically significant association was also found between hyperplastic polyps and age group, with higher rates observed in older age groups (p<0.001). The detection rate of serrated adenomas was 2%, with no significant difference between sexes. No serrated adenomas were detected in individuals under 30 years of age, and there were no significant differences among the various age groups. Malignant neoplasms were detected in 17 tests, with no significant differences between sexes. Malignancies were more common in individuals over 50 years. Colonoscopy is an operator-dependent procedure. Factors that influence lesion detection include bowel preparation, withdrawal time, endoscopist experience, devices that increase mucosal exposure, and imaging technologies , , , , . This study described the pattern of colonoscopies performed in a private tertiary hospital in the interior of the State of São Paulo. The sample consisted of individuals seen at a private service, who were not users of the Unified Health System (Sistema Único de Saúde - SUS) and who were referred by their physicians. The results obtained may reflect the fact that CRC screening programs have not yet been fully established in Brazil.
Despite awareness of the need for prevention measures, access to specialists, particularly within SUS, is limited, impairing the correct application of guidelines for the follow-up of detected lesions , . In this study, the cecal intubation rate was 97%, consistent with recommended guidelines . In addition, the complication rate (1.3%) was low, in agreement with the main meta-analyses reported in the literature , . However, a limiting factor in the assessment of complications was that only individuals who sought emergency care at the hospital were identified, since only these events are reported in the medical records. The main complications, such as bleeding and perforation, were associated with therapeutic procedures, in which these rates tend to be higher , . The ADR is the percentage of colonoscopies with at least one identifiable adenoma and is accepted as the primary quality indicator for these tests , . Corley et al. demonstrated a reduction in interval cancer with increasing ADR. The national literature is scarce, and a consensus on the ideal ADR for Brazil, a country with a mixed population, continental size, and cultural variability among its different regions, is still needed. Studies conducted at services in the southern and central-western regions of the country reported ADRs that are consistent with the international literature , , , . The overall ADR was 20%. Rates ranging from 5 to 37.5% have been reported in the literature , with recommendations of about 25% for mixed samples of men and women . Possible factors that may have contributed to the rate observed here include the predominance of females (53.5%), the number of individuals under 50 years, and the indication and interval of colonoscopies. A predominance of women has also been observed in other national studies , , , , . Culturally, Brazilian women are more likely to seek prevention programs or be referred for colonoscopy by their gynecologists . Additionally, according to the latest Brazilian Institute of Geography and Statistics (Instituto Brasileiro de Geografia e Estatística - IBGE) census, there is a predominance of women in several regions of the country, and their life expectancy is higher than that of men (79 versus 72 years) . Lower ADRs are expected for women , and the predominance of females in the sample may therefore have contributed to the overall rate found. Another Brazilian study with female predominance reported a lower ADR among women . Male sex is considered an independent risk factor for increased ADR . In the present sample, the ADR was 24% among males but 17% among females. However, when only screening colonoscopies in individuals ≥50 years were considered, the ADR was 20% among women and 27% among men, with an overall rate of 23%, values that are within current recommendations . Age is another independent risk factor for ADR, with higher rates being observed in individuals over 50 years. In our study, the increase in ADR with age was statistically significant, consistent with literature data . Following the change in the United States CRC screening guidelines to start screening at age 45, studies are being conducted to determine the ADR in the 45-49 age group. There is a trend toward a slightly lower ADR in this group than in the group of 50-54 years . Bilal et al. observed an ADR of 28% in the 45-49 age group compared to 38% in the 50-54-year-old group. In our study, the ADR was 15% in the 45-49 age group, but 25% among males, a value slightly lower than that found in men over 50 years of age. Moura et al.
also observed an ADR of about 25% in the 45-49 age group. This is an important finding since the recommended starting age of CRC screening in Brazil is still 50 years for the average-risk population. One-quarter of our sample consisted of individuals under 50 years old, a fact that may have contributed to the lower overall ADR found. Shaukat et al. estimated that, if the percentage of screening colonoscopies in younger patients (<50 years) at a service is 10% or 25%, a decrease in ADR of 1% or 3%, respectively, is expected. The indication of the colonoscopy is also essential in determining the ADR, which tends to be higher in surveillance colonoscopies than in screening tests , . Identifying the number of index colonoscopies in the sample was not possible, with the overall ADR being 23% in the screening group. Although recent literature suggests that including diagnostic tests in the ADR calculation is insufficient to lower the recommended thresholds, a statistically significant difference in ADR was found between the screening and other indication groups . Adopting international follow-up guidelines is considered a quality criterion for colonoscopies , , , . The inadequate application of these recommendations can lead to unnecessary expenses and additional patient risk . Subsequent colonoscopies in the same individual were not identified for evaluation of routine surveillance procedures due to the service profile, which performs examinations requested by different general practitioners or specialists. Unlike in the United States, monitoring the excessive use of colonoscopies in average-risk individuals is not common in Brazil . The PDR is easy to obtain and correlates with the ADR, as demonstrated in previous studies , . Another advantage is that its calculation does not require histopathological examination . However, some authors advocate against its use as a quality parameter, arguing that removing nonsignificant polyps, such as hyperplastic polyps in the rectosigmoid, can easily skew the results , . In this case series, the overall PDR was 23%, with a rate of 28% among men and 20% among women. There was a statistically significant increase in PDR with increasing age, consistent with other studies , , . By comparison, the AADR reported in the literature ranges from 4 to 10% . In a cohort of 200,000 colonoscopies, Penz et al. demonstrated a correlation between AADR and ADR, with the former increasing proportionally with the latter. Furthermore, the AADR does not vary significantly between high- and low-performance endoscopists, with a 25% ADR cutoff. The use of the AADR as a quality criterion remains controversial, since lesion size tends to vary between observers . The detection rate of sessile serrated lesions is variable among endoscopists, even among high-performing ones , . There is still a lack of consensus among pathologists on the classification of serrated lesions, even after the 2010 revision . We therefore did not include the SSPDR as a quality criterion in the analysis. In our service, specimens are sent to two different pathology laboratories in the city according to the preference of each endoscopist. Both laboratories have used the previous WHO classification for serrated lesions, explaining the term "serrated adenoma" used in this study. It is possible that some of the hyperplastic polyps were in fact serrated lesions. Continuous education and training of professionals are essential for improving examination quality and for maintaining low complication rates.
Periodic revision of the results is recommended to improve the ADR and AADR. Assessment of the SSPDR should also be encouraged, including efforts to standardize the classification of serrated lesions among pathologists and to improve the evaluation of the proximal segments of the colon. This study has significant limitations, mainly due to its retrospective design; however, it reports the findings of a private colonoscopy service with extensive experience in this procedure. The principal investigator collected all data, which helped reduce potential biases. Prospective studies involving robust case series are needed to obtain more detailed conclusions regarding the ideal ADR, AADR, and SSPDR in Brazil.

Colonoscopy proved to be an effective method for detecting polyps and adenomas with a low complication rate. The PDR was higher among men and increased significantly with advancing age. The ADR and AADR were comparable to those reported in the literature. Tubular adenomas predominated in the distal segments of the colon, while adenocarcinomas were not frequent.
Computational assessment of the functional role of sinoatrial node exit pathways in the human heart

The cardiac impulse generated in the sinoatrial node (SAN) propagates to the atrium through SAN-atrial junctions and to the rest of the heart through the cardiac conduction system. Whereas sinus tachy-brady arrhythmias are recognised clinical concerns, their mechanistic understanding remains restricted for a spectrum of reasons, including a limited understanding of the cardiac anatomy in the SAN region of the heart. Modern computational cardiology is used to quantitatively unveil pathophysiological arrhythmia mechanisms and explore therapies. In contrast to sophisticated human 3D ventricular models, the current generation of SAN models focuses on electrophysiological disorders, relying on simple SAN anatomies. To address this limitation, two-dimensional models of the SAN capable of assessing existing conjectures, such as those developed by the Vigmond group, have been constructed, although they remain less prevalent. Moreover, some two-dimensional models are simplified representations of thin transmural SAN electro-anatomy, a feature that limits their applicability. The electrically heterogeneous SAN has been implemented in the 3D human atrial model by Seemann et al., but the experimental evidence for SEPs was yet to be generated. More recently, Li et al. developed a detailed 3D anatomical model of the rabbit atrium using high-resolution (~24 μm) DT-MRI imaging and studied the interaction between the SAN and the atrioventricular node. It can be appreciated that the role of SEPs would be secondary to the other heterogeneities that Li et al. have studied. Apart from SEPs, experimental evidence as well as theoretical studies have shown the relevance of atrial strands interdigitating into SAN tissue. Whereas SAN interdigitations are a relevant anatomical feature, this study focuses on the role of SEPs in SAN function within the immediate anatomical vicinity of the human SAN.

Arrhythmia is a manifestation of the complex electrical propagations dictated by anatomy, intercellular gap junction coupling microstructure, and electrophysiology under pathological conditions. Detailed 3D human cardiac anatomy of the SAN region has been quantified using histological methods in the Dobrzynski group. The histological evidence strongly suggests the presence of a secondary pacemaker in the close vicinity of the SAN. However, the SAN may be electrically coupled to the surrounding atrium at only a few discrete locations called SAN exit pathways (SEPs), something that has eluded conventional imaging of the anatomy. The application of optical mapping methods to study right atrial electrical propagations, developed in the Efimov group, was crucial in providing evidence for the existence of SEPs.

A number of SAN-related electrical propagation patterns can be expected to be affected by SEPs. Functional experiments in right atrial preparations have suggested that propagating electrical waves are channelled in and out of the SAN at specific locations, i.e. through the SEPs. Under certain conditions, “macro re-entry” has been observed when electrical waves become bound to the SAN’s exterior, giving rise to atrial flutter or tachycardia. The biophysical factors that permit macro re-entry remain under-investigated. Re-entry within the SAN has been observed in post-myocardial-infarction dogs by Glukhov et al., and may also occur in the human SAN. In their study, Glukhov et al.
observed a persistent re-entry under the action of isoprenaline. Such intra-SAN re-entry has been termed “micro re-entry” and was attributed to intra-nodal fibrosis. However, elucidating the nature of the fibrosis may not be feasible experimentally due to the small size of the SAN and inter-individual variability. Prolonged episodes of such micro re-entry have been observed when circulating waves persistently exist within the SAN, giving rise to complex SAN-atrial propagations. A slow conduction velocity within the SAN, with experimentally observed values between 3 and 12 cm/s, may contribute to the stability of micro re-entry. Although the role of border-SEPs in sustained micro re-entry is becoming clear using experimental methods, the contribution of important factors such as intercellular coupling dysfunction remains to be explored.

Application of pharmacological agents is known to induce a shift in the leading pacemaker site (LPS) and may be affected by SEPs. Another consideration is that an insulating border-SEP configuration can be expected to shield the SAN from atrial tachycardia. These events in the proximity of the SAN are likely to be affected by the anatomy. The presence of a secondary pacemaker may also affect the events within the SAN, as well as participate in physiopathological pacemaking itself. However, additional electro-microstructural alterations may also be a necessary mechanistic component underlying the occurrence of these phenomena, a crucial factor that remains unclear to date.

In this computational study, the anatomical SEPs and the micro-structural cell-cell coupling representing intercellular gap junctions are linked to micro re-entry, macro re-entry, shift of the leading pacemaker site, and degeneration of tachycardia into fibrillation. Although the literature reflects the complex nature of cell-cell coupling heterogeneity or fibrosis, this study aimed at reproducing the observed phenomena using relatively simple constructs and a single re-entry. The experimental data also show the effect of acetylcholine and isoprenaline, both biochemicals secreted by nerve endings within the SAN, on shifting the leading pacemaker site within the SAN. Multiple modelling studies illustrate the electrophysiological basis for the LPS shift. However, cardiac tissue micro-structure in terms of cell-cell coupling may also be locally affected by similarly acting biochemicals, as shown in this study.

In this study, a de novo 3D electro-anatomical model was constructed and used to examine SAN electro-anatomy in light of recent experimental findings. The electrophysiology was modelled using simple cell models and may therefore be considered phenomenological. The main aim of this study was to ascertain whether anatomical SEPs can be related to arrhythmias observed in the vicinity of the SAN. Specifically, our goals were to:

1. Implement a functional electro-anatomical model of the human SAN;
2. Identify representative fibrosis conditions that permitted persistent micro re-entry and macro re-entry;
3. Demonstrate the functional role of the paranodal area; and
4. Demonstrate the shift of the leading pacemaker site due to altered SAN micro-structure.
2.1 Model construction

The 3D human SAN electro-anatomical model (3D model) is illustrated in .

2.1.1 Baseline electrophysiology

Electrically active cell types were implemented using variants of the Fenton-Karma model for human cell types ( ). Cell model equations and parameter values are given in the and , respectively. The atrial cell type parameters were adopted from a recent study as well as the original model, as follows. The parameter regulating activation of the upstroke current (J_fi), u_c, was set to 0.3. The parameter regulating the upstroke velocity (τ_d) was set to 0.2 to permit atrial propagations in our model. The remaining parameters for the atrial cell type were taken from Podziemski and Zebrowski. To simulate a short action potential, time constants in the slow outward and slow inward currents were modified (see ). To simulate the pacemaker cell types of the SAN and paranodal area, the J_fi current and the slow inward current, J_si, were modified. The parameter u_c was set to 0.1, and τ_d was adjusted to 0.05, which permitted the SAN cells or paranodal area cells to activate adjacent atrial cells. In the J_si equations, the parameter τ_w+ was set to 1.5, and the cycle length of the two cell types was then regulated by τ_w-. Further, u_c_si was reduced to 0.01. An electrically inactive fourth cell type was included in the model to represent both the insulating border and fibrosis. For simplicity, the inactive cell type was considered neither a source nor a sink, and its function was to stop propagating electrical waves.

It is known that heterogeneous coupled oscillators robustly synchronise, as well as sustain function even after deterioration of some individual oscillators, see e.g. . A uniform random distribution of cycle lengths centred around 850 ms was assigned to individual SAN cells, which gave SAN tissue waves at a cycle length of 850 ms. This was achieved by perturbing the parameter τ_w- ( ) randomly from 700 to 900. The resulting SAN pacing cycle lengths ranged from 750 ms to 945 ms. This also permitted a robust localisation of the LPS. The paranodal area is expected to be overdrive suppressed during physiological SAN pacemaking. It was therefore arbitrarily assigned a cycle length of 1400 ms, i.e. much longer than that of the SAN. The control atrial action potential duration was set to the experimentally observed value of approximately 260 ms. The ranges of SAN action potentials present in the spatially extended SAN, as well as the atrial action potentials, are shown in .

2.1.2 Adapted anatomy and anatomical variants

The model anatomy ( ) was adapted from our previous histological-immunohistochemistry study. It consists of a uniform structured grid of 128 x 128 x 60 points. The resolution of the anatomy is 0.25 mm (x direction) x 0.25 mm (y direction) x 0.50 mm (z direction). The original paranodal area and atrial region were incorporated unaltered into the model anatomy. The original SAN was smoothed to an ellipsoidal shape ( ), which permitted: a) the implementation of the insulating border-SEP configuration ( ); and b) a smooth diffusion gradient within the SAN, as shown in (see below). The original and modified SAN anatomies are illustrated in . The insulating border was assumed to be a 1 voxel thick layer of connective tissue on the SAN surface.
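To make the cell-model variants of Section 2.1.1 concrete, the following is a minimal sketch of a standard three-variable Fenton-Karma right-hand side, using the parameter names from the text (u_c, τ_d, τ_w±, u_c_si). The exact variant equations of Podziemski and Zebrowski are in the supplement and are not reproduced here; the numerical values below (other than those stated in the text) are placeholders, and the u-dependent switching of τ_v- in the full model is omitted for brevity.

```python
import numpy as np

def fenton_karma_rhs(u, v, w, p):
    """Time derivatives (du, dv, dw) of a three-variable Fenton-Karma cell.

    u is the normalised membrane variable; v and w gate the fast and slow
    inward currents. p is a dict of parameters named as in the text.
    """
    H = lambda x: np.heaviside(x, 0.0)                    # step function
    m = H(u - p["u_c"])                                   # above threshold?
    # Fast inward (upstroke) current: activation set by u_c, speed by tau_d
    J_fi = -v * m * (1.0 - u) * (u - p["u_c"]) / p["tau_d"]
    # Slow outward (repolarising) current
    J_so = u * (1.0 - m) / p["tau_0"] + m / p["tau_r"]
    # Slow inward current, gated around u_c_si
    J_si = -w * (1.0 + np.tanh(p["k"] * (u - p["u_c_si"]))) / (2.0 * p["tau_si"])
    du = -(J_fi + J_so + J_si)
    # Single tau_v_minus used here; the full model switches it with a second threshold
    dv = (1.0 - m) * (1.0 - v) / p["tau_v_minus"] - m * v / p["tau_v_plus"]
    # Cycle length of the pacemaker variants is regulated through tau_w_minus
    dw = (1.0 - m) * (1.0 - w) / p["tau_w_minus"] - m * w / p["tau_w_plus"]
    return du, dv, dw

# Placeholder SAN-type parameters; tau_w_minus is randomised per cell (700-900)
san = dict(u_c=0.1, tau_d=0.05, tau_0=12.5, tau_r=33.0, k=10.0,
           u_c_si=0.01, tau_si=29.0, tau_v_minus=1000.0, tau_v_plus=3.33,
           tau_w_minus=float(np.random.uniform(700, 900)), tau_w_plus=1.5)
```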
The insulating border was perforated at four locations to generate SAN-atrial electrical junctions, representing the four SEPs identified in the experimental studies ( ). To permit comparison, each simulation experiment was performed on five anatomical variants, each of which omitted certain components of the full model. In each of the following variants, an omitted anatomical region was replaced by the atrial tissue type. The variants considered were:

- SAN only: the paranodal area and border-SEPs were omitted.
- SAN with border-SEPs: the paranodal area was omitted.
- SAN with paranodal area: the border-SEPs were omitted.
- Paranodal only: the SAN, border, and SEPs were omitted.
- Complete model: this configuration included all anatomical components: a SAN surrounded by border-SEPs, as well as the paranodal area column between the SAN and the endocardial atrial wall ( ).

2.1.3 Intercellular gap junction coupling micro-structure modelling

The model micro-structure incorporates the cell-cell coupling and is implemented as diffusive electrotonic coupling. Within the electrically homogeneous atrial part of the model, the diffusion was set, as in previous studies, to 0.35 mm²/ms, which gave a conduction velocity of 0.6 m/s in the human atrium. The paranodal area was also considered to be electrically homogeneous, with a constant diffusion of 0.035 mm²/ms. The lower diffusion in the paranodal area compared with the surrounding atrial part permitted the paranodal area to act as a pacemaker in the absence of SAN pacemaking or external pacing.

In contrast to the atrial and paranodal tissues, the SAN is both micro-structurally and electrically heterogeneous. Multiple simulations were performed to dissect the effects of SAN electrical heterogeneity from those of the anatomy. To permit physiological SAN pacemaking, two factors were found critical. Firstly, a diffusion gradient was found essential to permit initiation of electrical wave propagation close to the centroid of the 3D SAN ( ). The diffusion gradient may be justified, since experimental measurements show gap junction protein distributions within the mammalian SAN where connexin 43 (a gap junction channel protein responsible for conduction velocity) is absent in the centre of the SAN but present in the periphery. Mathematical modelling studies spanning several decades [ , , ] have incorporated diffusion gradients in the SAN based on experimental findings of SAN conduction heterogeneity, and we adopted a similar approach. To establish the diffusion gradient in our model, smaller ellipsoidal surfaces with the same centroid, ellipticity, and long and short axes as the SAN were constructed. The smallest surface corresponded to the common centroid, whereas the largest surface was identical to the SAN's surface. Diffusion values were assigned to the SAN grid points based on the size of the individual ellipsoids. The values ranged from 0.00035 mm²/ms for the smallest surface to 0.35 mm²/ms (i.e. the atrial diffusion value) for the largest surface. The centroid of the SAN, with coordinates (12.75 mm, 12.75 mm, 17 mm), was assigned the minimal diffusion; a sketch of this assignment is given below. Secondly, electrical heterogeneity ( ) was implemented within the SAN by simulating a uniformly random distribution of pacemaker cycle lengths, devoid of gradients, with a mean of 850 ms. The electrical heterogeneity ensured that consecutive excitations were always initiated at the centroid of the SAN (see ).
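A sketch of the ellipsoidal diffusion assignment described above. The centroid and the end-point diffusion values are from the text; the semi-axes and the log-linear interpolation between 0.00035 and 0.35 mm²/ms are assumptions, since the text states only the two extreme values.

```python
import numpy as np

def san_diffusion(xyz, centroid=(12.75, 12.75, 17.0),
                  semi_axes=(6.0, 3.0, 2.0),        # hypothetical, in mm
                  d_min=0.00035, d_max=0.35):
    """Diffusion (mm^2/ms) at one SAN grid point from nested ellipsoids.

    The normalised ellipsoidal radius r is 0 at the centroid and 1 on the
    SAN surface; diffusion ramps from d_min to the atrial value d_max.
    """
    rel = (np.asarray(xyz, float) - np.asarray(centroid)) / np.asarray(semi_axes)
    r = min(float(np.linalg.norm(rel)), 1.0)        # clamp to the surface
    return d_min * (d_max / d_min) ** r             # log-linear ramp (assumed)
```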
Unlike the SAN, the paranodal area (as shown in ) had homogeneous electrical and isotropic diffusion properties; it is a column of tissue that vertically spans the 3D model.

2.1.4 Action of pharmacological agents

The actions of isoprenaline and acetylcholine were qualitatively adapted from experimental data. In those studies, both isoprenaline and acetylcholine reduced the atrial action potential duration. Therefore, a short action potential of 190 ms was implemented to simulate the actions of isoprenaline or acetylcholine in the atrial cell type. Under the action of isoprenaline, the mean pacing cycle of the 3D SAN was arbitrarily reduced to 600 ms, whereas under the action of acetylcholine it was increased to 1200 ms. In this manner, the qualitative action of the two biochemicals was captured in the model. The parameter values simulating the electrophysiology are given in the , and the action potentials are shown in .

2.2 Simulation methods

Modifications to the basal anatomy were combined with specific electrical initial conditions to perform a number of simulations. All simulations were executed to produce 5 s of electrical activity in the 3D model according to the mono-domain reaction-diffusion partial differential equation:

∂V/∂t = ∇·(D(x, y, z) ∇V) + I_ion,    (Eq 1)

where V is the membrane potential of the cell at location (x, y, z), D is the spatially dependent diffusion, and I_ion is the reaction current produced by the cell at that location. At the boundaries and at the interface between active and inactive tissue, no-flux boundary conditions were implemented, i.e. D(x, y, z) ∇V = 0.

In the case of the SAN and paranodal area pacemaking simulations, the model's electrophysiology was initialised to the resting state for the atrial cell types, or to the minimum potential in the case of the pacemaker (SAN and paranodal area) cell types. The system was then permitted to evolve. In simulations demonstrating the role of paranodal pacemaking, the SAN as well as the border-SEPs were replaced with atrial tissue and 5 s of electrical activity simulated. In the case of re-entry simulations, the electrophysiological initial conditions were produced using the phase distribution method, as described in detail previously. The phase distribution method permits the initiation of scroll waves at a chosen location. Accurate estimation of the 3D filament locus was achieved using the phase singularity method. The 3D phase singularity detection algorithm is illustrated in .

2.3 Micro re-entry simulation

To identify the micro-structural alterations that are necessary to simulate persistent micro re-entry, the 3D model was altered to incorporate the experimentally observed action of isoprenaline, i.e. a shorter SAN pacing cycle. Re-entry was induced within the SAN using the phase distribution method as described above. The action potential alterations alone, however, were insufficient to permit the induction of persistent micro re-entry in the 3D model. It is known that non-conducting fibrosis regions can alter conduction patterns and assist in preserving re-entrant waves. Arrhythmogenic cardiac fibrosis is now accepted to be diffuse, patchy, or compact. In terms of computational modelling, diffuse fibrosis may be thought of as individual cell locations becoming inactive (spatial size much less than 1 mm), while patchy fibrosis would be inactive patches covering regions or strands of around 1–5 mm, and compact fibrosis a significantly larger region.
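For illustration, the three fibrosis classes just defined can be generated as inactive-voxel masks roughly as below. These are generic toy masks on the model's 0.25 mm grid, not the single hypothesised elliptical column actually used in this study (described next); patch counts and radii are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

def diffuse_mask(shape, fraction):
    """Diffuse fibrosis: isolated voxels (<< 1 mm) made inactive at random."""
    return rng.random(shape) < fraction

def patchy_mask(shape, n_patches, radius_vox):
    """Patchy fibrosis: inactive blobs of ~1-5 mm (4-20 voxels at 0.25 mm)."""
    mask = np.zeros(shape, dtype=bool)
    zz, yy, xx = np.indices(shape)
    for _ in range(n_patches):
        cz, cy, cx = (rng.integers(0, s) for s in shape)
        mask |= (zz - cz) ** 2 + (yy - cy) ** 2 + (xx - cx) ** 2 <= radius_vox ** 2
    return mask

# Compact fibrosis would be a single large patch, e.g. patchy_mask(shape, 1, 30)
```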
The experimental literature suggests that the SAN has interstitial fibrosis strands with millimetre-sized dimensions. Modelling results by ten Tusscher and Panfilov suggest that persistent re-entry is more likely under patchy fibrosis than under diffuse fibrosis conditions. Further, the same group has shown that the propensity for re-entry is significantly higher when the size of the spatial heterogeneity is large (several mm). Studies from the Vigmond group demonstrate that the spatial size of the heterogeneity is related to the global size of the model. The studies from both groups show that the spatial extent of the fibrosis is of millimetre scale. Whereas there is a growing literature that quantifies properties of fibrosis in spatially extended systems of excitable cardiac cells, there is a lack of similar studies that address the same questions in systems of coupled pacemaker cells such as the SAN.

Importantly, the studies simulating fibrosis as mentioned above distribute the fibrosis patches either randomly or using imaging data. While imaging data for SAN fibrosis were unavailable to our study, using random distributions in a 3D model posed challenges in terms of computational cost. Firstly, multiple simulations at a given level of randomly distributed fibrosis would be required. This would be combined with estimating the size-proportion relationship of fibrosis in the SAN, which is unknown. Importantly, as the size of the SAN-atrial junctions representing SEPs is also uncertain, assessment of the interplay between fibrosis size and SEP size would add to the computational cost. Finally, and most significantly, our goal was to demonstrate one instance of fibrosis that permitted micro re-entry. A systematic exploration of SAN fibrosis is out of the scope of this study and is the subject of future work. Therefore, a simple form of fibrosis was hypothesised to reproduce micro re-entry, as illustrated in . The hypothesised fibrosis patch within the SAN was modelled as a short elliptical column to provide a quasi-2D narrow conducting band of SAN tissue as a wave propagation pathway. It permitted electrical wave propagation in the X-Z plane within the available SAN tissue ( ), but not in the transmural Y-Z plane. Two cases, based on the presence and absence of the border, were simulated (see Figs and respectively, for model geometry).

2.4 Macro re-entry simulation

Another form of re-entry, i.e. circulating waves on the exterior of the SAN, has been observed in experiments. The circulation, termed macro re-entry, was observed when SAN pacemaking was suppressed using acetylcholine. In accordance with the experimental information, the model atrial tissue's action potential duration was reduced, which accommodated a re-entry in the atrial part around the SAN. Although the SAN action potential was also altered in the experimental set-up, we first simulated macro re-entry without altering SAN electrophysiology. When macro re-entry was initiated around the SAN, electrical waves propagated around the SAN as well as into the available atrial tissue. This hindered the re-entrant wave's anchoring to the SAN's exterior. Therefore, we hypothesised that the myocardial infarction may have induced atrial fibrosis. Transmural atrial fibrosis was found to provide a strong micro-structural substrate for the re-entrant wave anchoring. However, we sought the minimal micro-structural alteration that permitted macro re-entry. It was found that inclusion of fibrosis in the atrial region between the SAN surface and the epicardial surface, as illustrated in , A, may provide the necessary substrate.
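The micro and macro re-entries above, and the atrial tachycardia in the next subsection, were all initiated with the phase distribution method. Below is a generic sketch of the idea under simplifying assumptions: a straight transmural filament and a single recorded action potential cycle used as the template; the implementation cited in the text may differ in detail.

```python
import numpy as np

def phase_distribution_ic(shape, filament_xy, templates):
    """Scroll-wave initial condition via the phase distribution method.

    Each voxel's state is sampled from a recorded one-cycle action
    potential at a phase set by its angle around a straight transmural
    filament at filament_xy. templates maps variable names (e.g. 'u',
    'v', 'w') to 1D arrays holding one full cycle of that variable.
    """
    zz, yy, xx = np.indices(shape)
    theta = np.arctan2(yy - filament_xy[1], xx - filament_xy[0])   # [-pi, pi]
    n = len(next(iter(templates.values())))
    idx = ((theta + np.pi) / (2.0 * np.pi) * (n - 1)).astype(int)  # phase index
    return {name: traj[idx] for name, traj in templates.items()}
```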
2.5 Atrial tachycardia initiation

Atrial tachycardia was induced as a transmural scroll wave using the phase distribution method. The initial transmural scroll wave's filament was placed at a location where neither SAN nor border tissue was present along the transmural path from the epicardial to the endocardial surface ( ).

2.6 LPS shift simulation

The location of minimum diffusion (diffusion representing cell-cell coupling) was shifted to an arbitrarily different location within the SAN, which represented one site of release of the biochemicals from nerve endings. A new diffusion gradient from the new low-diffusion location towards the atrium was set up ( , bottom row, first column).

2.7 Numerical methods

The operator splitting method was used to generate numerical solutions of Eq 1 in the 3D model. Operator splitting permitted efficient calculation of the cell model ordinary differential equations (ODEs) part ( ), followed by the PDE part, at each time step. The ODEs were solved using an O(dt⁵) implicit backward difference formula. The PDE part was first discretised using a second order, O(dx²), Crank-Nicolson finite difference (FD) formulation. The boundary conditions were incorporated by implementing the phase field relaxation method into the implicit solver to ensure accuracy of the numerical scheme. At each time step, the system matrix of the PDE was preconditioned and the solution obtained iteratively. Whereas both the ODE and PDE solvers exploit the advantages of adaptive time stepping, a user-defined maximum time step of dt = 0.1 ms was specified to limit errors. Test simulations at dt = 0.05 ms gave virtually the same results as those obtained with dt = 0.1 ms. The solver utilises MPI-based geometric box partitioning to parallelise simulation runs. A simulation run of the model generated 5 s of electrical activity using 48 CPUs in 6 hours. Data in the form of the spatial distribution of voltage were recorded at 1 ms intervals to permit post-processing, visualisation, and data analysis. Like the numerical calculations, data I/O in our code is also parallel, thus improving run-time efficiency. The solvers and algorithm implementations developed for this study in our laboratory are part of a toolbox providing a new computational dimension to complement our experimental research into the cardiac conduction system.
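The production solver described above is implicit (BDF for the ODEs, Crank-Nicolson with phase-field boundaries for the PDE). As a structural illustration only, one explicit operator-splitting step of Eq 1 on a uniform grid might look like the sketch below; uniform spacing (rather than the model's anisotropic 0.25/0.50 mm voxels), forward Euler, and a crude masking of inactive tissue are all simplifications.

```python
import numpy as np

def step_monodomain(u, D, active, dt, dx, ion_rhs):
    """One explicit operator-splitting step of Eq 1 (illustration only).

    u: voltage field (3D); D: per-voxel diffusion; active: boolean mask of
    conducting tissue; ion_rhs(u): local reaction term I_ion. Mirrored
    ('edge') padding enforces no-flux at the domain boundary; holding
    inactive voxels fixed is a stand-in for the interface condition.
    """
    u = u + dt * ion_rhs(u) * active              # reaction sub-step (ODE part)
    up = np.pad(u, 1, mode="edge")                # no-flux at domain edges
    lap = (up[2:, 1:-1, 1:-1] + up[:-2, 1:-1, 1:-1]
         + up[1:-1, 2:, 1:-1] + up[1:-1, :-2, 1:-1]
         + up[1:-1, 1:-1, 2:] + up[1:-1, 1:-1, :-2] - 6.0 * u) / dx ** 2
    u_new = u + dt * D * lap                      # diffusion sub-step (PDE part)
    return np.where(active, u_new, u)             # inactive tissue held fixed
```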
3.1 Baseline model SAN behaviour with and without SEPs

Activation patterns within the SAN in the 3D model's variants are illustrated in . The activation time of the SAN was found to be between 4 and 6 ms in the four cases ( ). Leading pacemaker sites (LPS) in all four anatomical variants were located in the vicinity of the SAN's centroid (12.75 mm, 12.75 mm, 17 mm), as identified by the coordinates of the location within the SAN where electrical propagation was first initiated during any particular heartbeat ( ). The LPS was seen to be marginally affected by the border-SEPs and the paranodal area, as well as by the electrical heterogeneity in the SAN. The atrial pacing rate was mainly affected by the insulating border. Relative locations of the LPS under the four anatomical configurations considered are illustrated in . In the absence of the border, the atrial tissue was paced at periods of 1048 ms and 999 ms ( respectively). However, in the presence of the border, the atrial tissue experienced pacing periods of 889 ms and 898 ms ( respectively). The synchronous firing of all SAN cells was confirmed, as shown in . The APD of individual cells throughout the SAN was measured, and it was found that all cells fired synchronously at the pacing rates shown in .

3.2 Paranodal area pacemaking when SAN is inactive

The activation sequence with the functional paranodal area is shown in . The LPS was found to be located within the paranodal area ( ). The paranodal area's LPS was found to be stable over the duration of the simulation, as shown by its location during several beats. The initiated wave's speed was slower in the paranodal area than in the surrounding atrial tissue, because its diffusion is 10-fold lower than that of the atrial tissue. In contrast to the short SAN activation times (4–6 ms) observed in , the paranodal area activated slowly (15 ms activation time). As individual paranodal area cells have a long cycle length of 1400 ms, the paranodal tissue paced the 3D model at a cycle length of 1565 ms.

3.3 Micro re-entry based on SAN fibrosis, no SEPs within insulating border

The anatomical configuration with SAN fibrosis used to simulate micro re-entry is shown in . Unidirectional propagations were initiated in the band of SAN tissue between the SAN fibrosis and the atrial tissue. In the case when the insulating border was absent ( ), the induced unidirectional propagation travelled along the fibrosis patch and also into the atrium. As the propagation velocity is higher in the atrial part than in the SAN, the SAN re-entry dissipated by propagating into the atrial tissue. The mechanism of micro re-entry dissipation is illustrated in (last panel). In the case when an insulating border was incorporated, SEPs were excluded to highlight the wave dynamics between the fibrosis patch and the insulating border ( ). The presence of the insulating border prevented atrial excitation, thereby permitting the induced unidirectional excitation to propagate unhindered along the available narrow band of SAN tissue. Due to a balance between the isoprenaline-induced short propagation wavelength and the sufficient diffusion in the narrow conducting band of the SAN, the micro re-entry persisted almost periodically throughout the 5 s of simulated activity (period = 705 ms). The observed APD during the micro re-entry was 635 ms, and the conduction velocity in the narrow layer of viable SAN tissue was 0.043 m/s, as compared to 0.4 m/s in the atrial part.
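As a consistency check, the wavelength of the circulating wave follows directly from the measured conduction velocity and APD:

\[
\lambda = \mathrm{CV} \times \mathrm{APD} = 0.043\ \mathrm{m/s} \times 635\ \mathrm{ms} \approx 27.3\ \mathrm{mm}
\]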
These values gave a wavelength of approximately 27 mm for the circulating wave in the SAN's viable tissue.

3.4 Micro re-entry based on SAN fibrosis, full model with SEPs

The function of SEPs in micro re-entry was then assessed. The paranodal area played a minimal role in the re-entry dynamics. Therefore, simulations were performed in the full model ( ). Depending on where the unidirectional propagation was induced, the SEP toward which the excitation was propagating permitted excitation of atrial tissue. shows an instance where the unidirectional propagation moved towards SEP1 and caused excitation of atrial tissue at SEP1 ( ). This gave rise to propagations in two distinct directions: within the SAN along the narrow SAN band, and in the atrial part, where the propagation moved significantly faster. The SAN propagation continued to circulate within the SAN. Whenever it was in the vicinity of a SEP, depending on the repolarisation status of the adjacent atrial tissue, it initiated an atrial propagation. On the other hand, the atrial propagation moved rapidly through the homogeneous atrial part. When an atrial propagation was in the vicinity of a SEP, it initiated propagation into the SAN depending on the SAN tissue's repolarisation status. Such an event is illustrated in the t = 300 ms and E2 panels. As shown in , the atrial propagation that entered the SAN propagated in every direction possible. In the case when propagations collided, they extinguished each other. The unhindered propagations continued in a re-entrant circuit around the fibrosis patch within the SAN. The present model may be inadequate to simulate re-entry using diffuse fibrosis. Therefore, a large central "lump" of fibrosis within the SAN was implemented to render the re-entry persistent (period = 550 ms). In the simulation result shown in , the re-entry initially propagates counter-clockwise in the SAN tissue between SEP2 and SEP1. As it traverses SEP1, atrial tissue is stimulated to produce an atrial propagation, while the re-entrant propagation within the SAN continues towards SEP4. When the SAN propagation's wave front reached SEP4 or SEP3, the atrial tissue was either already depolarised or refractory. As a result, further excitation of atrial tissue by the propagating SAN excitation did not occur. However, the atrial excitation reached the atrial side of SEP2 prior to the SAN re-entry reaching it from within the SAN. This caused the atrial wave to enter the SAN through SEP2, where it propagated in both clockwise and counter-clockwise directions. The clockwise wave collided with the prior counter-clockwise re-entry. The newly induced counter-clockwise propagation continued to propagate along the narrow SAN path between the fibrosis and the insulating border region.

3.5 Macro re-entry anchors around atrial fibrosis

In the first simulation ( ), re-entry was induced in an anatomy omitting the border. In addition, the SAN-atrial electrical heterogeneity gave simultaneous slow-fast propagations in the vicinity of the SAN. As the time frames of show, this combination was sufficient to sustain re-entry that circulated apparently around the SAN. In this case, the SAN was activated at the same period as the re-entry. The circulation was sustained by the rotor filament ( , mechanism) being almost stationary in the atrial-paranodal area part from the SAN to the endocardial surface. The arms of the re-entry generated propagations around the SAN.
The period of the SAN excitations was approximately 150 ms, similar to the atrial excitation period of 126 ms. Since the period of the macro re-entry was much shorter than the SAN's intrinsic pacing period (~1 s), the SAN was overdrive suppressed by the circulating waves. In the next simulation, the macro re-entry was induced in an anatomy that included the border-SEPs configuration ( ). Due to the presence of the insulating border, the non-conducting region consisting of the atrial fibrosis and the insulated SAN was significantly larger, which facilitated circulation of the excitation waves around the SAN. When the waves in the atrial tissue were in the vicinity of SEPs, propagations entered the SAN at those SEPs depending on their repolarisation status. Thus, as the macro re-entry progressed, propagating waves were also initiated inside the SAN through some of the SEPs (e.g. , time frames panel for t = 2.35 s). The filament dynamics had a complex pattern ( , Mechanism). The single initiated filament broke into two or more filaments. Each of the filaments gave rise to propagations in the atrial tissue that contributed to the macro re-entry. The period of the SAN excitations was approximately 241 ms, significantly slower than the re-entry-related rapid atrial excitation period of 100 ms.

3.6 Shielding of SAN from external re-entry by SEPs

The evolution of re-entry initiated fully in the atrial region was simulated ( ). The rapid atrial tachycardia overdrive suppressed the SAN's and the paranodal area's inherent electrical activities in all respective cases. When the border-SEPs and paranodal area ( , SAN only) were omitted, the arm of the tachycardia periodically initiated excitations in the SAN. Due to the electrical SAN-atrial heterogeneity, these excitations were erratic. The wave propagations within the SAN also broke up, producing daughter rotors that contributed to the SAN's erratic pacing. The dominant frequency map shows that the atrial tachycardia caused the SAN periphery to be paced at a much higher rate than the centre of the SAN. The filament of the mother rotor meandered due to the erratic activations. Eventually, there were a significant number of small wavelets showing atrial fibrillation. In contrast, when the border-SEPs were present ( , SAN with border-SEPs panels), the rapid atrial tachycardia did not pace the SAN at such a high rate, due to the insulating effect of the border. The SEPs permitted periodic excitations from the atrial tissue into the SAN tissue. The dominant frequency map shows that the SAN was paced at a much lower rate than in the case when the border was omitted. The filament was relatively stable, but was also seen to meander to a certain extent. The number of filaments remained low ( ). In the case when the paranodal area was included ( , third and fourth column of panels), the paranodal area caused a large prolongation of the mother rotor's filament. The filament extended and often broke down, and multiple daughter filaments were generated ( , third column). In the complex paranodal area-SAN anatomical region, the paranodal area prolonged the filaments and the SAN's diffusion anisotropy caused break-up, leading to rapid genesis of atrial fibrillation, as shown by the filament numbers ( ). In the full model ( , column V), the border-SEPs insulated the SAN from the atrial tachycardia, but the paranodal area promoted filament prolongation and often break-up of the filament. This is reflected in the modest number of filaments seen in the full model ( ).
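The dominant frequency maps referred to in this section can be computed voxel-wise from the saved 1 ms voltage traces. The paper does not state its estimator, so the simple FFT-peak version below is an assumption.

```python
import numpy as np

def dominant_frequency(v_t, dt_ms=1.0):
    """Dominant frequency (Hz) of one voxel's voltage trace.

    v_t: 1D voltage samples taken every dt_ms milliseconds (the model
    saved voltage at 1 ms intervals). Applying this per voxel yields a
    dominant frequency map.
    """
    v = v_t - v_t.mean()                          # drop the DC component
    spec = np.abs(np.fft.rfft(v))
    freqs = np.fft.rfftfreq(v.size, d=dt_ms / 1000.0)
    return freqs[1:][np.argmax(spec[1:])]         # skip the zero-frequency bin
```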
This is reflected in the modest number of filaments seen in the full model ( ).

3.7 LPS is shifted by biochemically induced micro-structural alterations

illustrates that the micro-structure may be a significant factor in LPS shift. In the basal model anatomy, the LPS was found to be at the SAN’s centroid, barring any small random functional fluctuations ( , top row). The LPS in the SAN without a border, as well as with a border, was found to be in close proximity to the hypothetical nerve ending location. When the border-SEPs were present ( , bottom row, column C), the locations of the SEPs appear to affect the LPS location as a secondary effect to the altered micro-structure. As the SAN is small (approximately 2 mm long), the shift could be simulated only in a limited region. The relative locations of the shifts caused by the various anatomies are shown in .

3.8 Observation of fibrosis in human SAN

Experimental evidence of SAN fibrosis is shown in . In comparison to the young heart ( , left), the old heart has significantly more fibrosis. The fibrosis is distributed throughout the SAN. Although the correlation between the level of fibrosis and the anatomical location within the old heart’s SAN cannot be conclusively established, the data strongly suggest that fibrosis may disrupt electrical wave propagation. Under a suitable distribution of fibrosis, it may be possible to elicit re-entrant tachycardia in such an old heart.
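As a simple consistency check on the micro re-entry values reported in Section 3.3, the quoted 27 mm wavelength follows directly from the measured conduction velocity and action potential duration:

\[ \lambda = \mathrm{CV} \times \mathrm{APD} = 0.043\,\mathrm{m/s} \times 0.635\,\mathrm{s} \approx 27.3\,\mathrm{mm} \approx 27\,\mathrm{mm}. \]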
The three main developments and findings of this study were: the establishment of a 3D human electro-anatomical model that incorporates the new anatomical features of SEPs and the paranodal area; the demonstration that micro and macro re-entry are possible due to fibrosis; and the proposal of a hypothesis, anchored in extant experimental data, that microstructural alterations are sufficient for an LPS shift and do not require electrophysiological modifications.

4.1 De novo 3D human SAN model

To the best of our knowledge, we have presented here the first 3D electro-anatomical model of the human SAN. It incorporates known and hypothesised anatomical features and is capable of presenting plausible arrhythmia mechanisms which can be tested in the laboratory and clinic. The model was used to simulate multiple electrical events to identify prime factors anchored in anatomy and micro-structure.

4.2 Baseline model establishment

Baseline SAN pacemaking simulation using our 3D model shows that the micro-structure, consisting of a SAN cell-cell coupling gradient, regulates LPS location. The assumed random electrical heterogeneity conferred robustness to where the SAN waves originated; it was not found necessary to include a further SAN electrical gradient. Whereas the optical mapping experiments point towards conduction pathways originating from the SAN where they form SEPs, the gradients hypothesis requires a revision in light of the new data [ , , ].

4.3 Simulation of observed micro re-entry

In a previous study, micro re-entry was simulated within a large block of pacemaker tissue in the absence of SAN fibrosis . However, the realistic SAN ( ) is small and may be incapable of containing a rotating scroll wave with a large wavelength. Factors such as intra-SAN fibrosis were found to be necessary to preserve micro re-entry. Indeed, pharmacological agents induce electrophysiological as well as microstructural (i.e. fibrosis) alterations . A large non-conducting region created a pathway for circular propagation in our model. The pathway consists of SAN cells whose excitation depends on a slow diastolic depolarisation. Weaker diffusion compared to the atrial part, combined with the slow diastolic depolarisation of the SAN cells in the pathway, generated a conduction velocity of 0.043 m/s in our model, which is comparable to the experimental estimates of 3 to 12 cm/s by Fedorov et al. . Although the hypothesised fibrosis is one example, other SAN fibrosis configurations that promote micro re-entry are also possible . The simulations of indicate that SAN micro re-entry may persist due to its shielding from atrial hyperpolarisation or other atrial excitations. Of course, more numerous small patches of fibrosis may also be a causal factor of smaller re-entrant circuits, as observed in idealised atrial tissue simulations of fibrosis . In addition, in the presence of SEPs, the re-entry within the SAN is associated with complex SAN-atrial electrical interaction promoting a more rapid tachycardia (period = 550 ms) compared to a re-entry purely circulating around the SAN. From the micro re-entry simulations, it is clear that an insulating border is necessary to shield the rapid SAN excitation from dissipating into the surrounding atrial tissue.

4.4 Macro re-entry

Similar to micro re-entry, the pharmacological agents and pacing protocols used to induce macro re-entry in the experimental setups may themselves promote fibrosis.
In our simulations, we hypothesise that atrial fibrosis between the SAN and the epicardial surface provides a sufficient micro-structural substrate for persistent macro re-entry. During macro re-entry, it can be seen that a potential border-SEPs configuration firstly insulates the SAN, but is also a crucial configuration that can explain complex atrial-SAN-atrial propagations. Further focused clinical and experimental examination is required to observe such rapid tachycardia, and also to pinpoint the nature of the border-SEPs anatomy. The atrial fibrosis, however, was crucial in sustaining the atrial flutter. Although the atria-induced SAN excitation was sufficiently rapid to suppress the SAN’s intrinsic pacemaking, it can be appreciated that the mechanism of macro re-entry relies on atrial fibrosis. The SAN activation was significantly more periodic in the presence of a border in contrast to the no-border case, which is phenomenologically in agreement with the experimental findings . In addition, the existence of a border-SEPs configuration somewhat shields the SAN from external tachycardia.

4.5 Shift of LPS due to altered gap junction coupling

Whereas electrophysiological alterations have been implicated in the LPS shift [ , , ], the presented model permitted a new correlation between the intra-SAN leading pacemaker site and the SAN cell-cell coupling microstructure. Of course, simulating the shift of the leading pacemaker outside of the SAN will require additional components in the presented model, which may include sympathetic and parasympathetic stimulation as well as biophysically detailed electrophysiology. In the LPS shift simulation of , we demonstrated an additional factor contributing to the shift of LPS location. The mechanism of LPS shift to outside of the SAN may not involve a simple micro-structural alteration, but may have to be accompanied by electrophysiological alterations as well.
The data presented in this study must be interpreted within the confines of the model limitations, as well as the limitations of the experimental data.

5.1 Realistic estimates from detailed electrophysiology

In this study, a simple Fenton–Karma model was used to simulate electrical excitation throughout the model’s tissue types. The use of the three-variable phenomenological cell models permitted rapid demonstration of the observed experimental phenomena. However, future studies using ionically detailed cell models for the human SAN and atrium are required. The use of ionically detailed cell models will provide improved implementation of pacing rates, action potential durations, take-off potentials, and upstroke velocities, and provide better estimates of refractoriness and wavelengths. The detailed electrophysiology will also permit a better implementation of diffusive inter-cellular coupling, which is usually estimated based on observed conduction velocity. As future data for secondary pacemakers such as the paranodal area become available, they will also be incorporated in future studies. We appreciate that detailed cell models of the human SAN and atria will further improve the simulation of basal and altered electrical activity due to acetylcholine and isoprenaline, permitting a better reproduction of the experimental phenomena of micro- and macro-re-entry. They will also assist in more accurate estimation of SAN and atrial conduction properties in the presented model. The source-sink relationship between the two tissue types, in respect of a potential insulating border-SEPs anatomical configuration, should also be explored using the detailed cell models. It is thought that the paranodal area acts as a secondary pacemaker in the human heart . In the presented model, we therefore assigned a much slower cycle length to the paranodal area’s cell type compared to the SAN, to permit the SAN to overdrive suppress the paranodal area during physiological pacemaking. The electrical properties of the paranodal area may be better assessed by the use of validated electrophysiological models in the tissue types that surround it, i.e. SAN and atrial tissue. However, the phenomena of interest could be simulated using the simple Fenton–Karma dynamics, and the electrophysiological information content of the presented model will be extended in future work.

5.2 Anatomical model limitations

A limitation of our model may be that the simulated SAN activation time is 5–10 ms, in contrast to the much longer experimentally observed 40–80 ms . An important reason for the difference could be that the conduction velocity was far slower in the experimental preparations. Another reason could be that the electrophysiology simulated in this study cannot capture the ion channel detail present in real right atrial preparations. In either case, the conduction velocity and closer matching of the modelled electrophysiology to actual SAN preparations will affect the SAN’s pacemaking rate as well as the numerical values of the periods of re-entry. It may also affect the complexity of the SAN-atrial propagations observed in our simulations. The locations of SEPs in our model were based purely on diagrammatic representations from past studies in the literature, rather than being directly mapped from experimental images onto the 3D anatomy. The locations and inter-SEP distances will affect the dynamics of the simulated re-entrant phenomena in this study, but the overall results are expected to be qualitatively similar.
Whereas a biophysically detailed, accurate model is under development, the presented model is phenomenological and aimed to establish correlations between SAN anatomy and electrical function. The 3D model’s spatial extent constrained the simulation of more realistic events. In the future, we will incorporate nearby blood vessel ostia that will act as sources of ectopy as well as tachycardia pinning. A larger atrial tissue region will also permit simulation of realistic scroll wave dynamics based on clinically measured action potential durations.

5.3 Anatomical anisotropy

As the focus of this study was the electro-anatomy, fibre orientation micro-structure was omitted. However, it is expected that fibrosis will affect fibre orientation. In the future, and especially in spatially larger models, it is relevant to incorporate fibre orientation information based on detailed imaging data or theoretical models [ , , ].

5.4 Inter-cellular diffusive coupling in the paranodal area and SAN

The study of the paranodal area is still nascent [ , , ]. To the best of our knowledge, experimental action potential recordings, pacemaking properties, and conduction velocity estimates are unavailable. This necessitated the implementation of realistic but arbitrary diffusion-electrophysiological properties. It may be noted that our choice of parameters for the paranodal area within the 3D model permitted the simulation of several experimentally observed complex SAN phenomena. The diffusion gradient in the SAN region of our model permits pacing of the surrounding atrial tissue. The gradient may be the result of varying gap junction protein expression from centre to periphery , or of other factors such as fibroblast heterogeneity , both of which regulate conduction velocity. As the effects of gap junctions are summarised by the diffusion in the model, a gradient may be justified as implemented in previous modelling studies. However, it should be noted that the exact mathematical form of the increase of conduction from centre to periphery is yet to be estimated experimentally. Whereas a distance measure has been used in this study, others have used a spectrum of different equations and formulations [ , , ]. Future simulations are required to assess which formulations of SAN conduction heterogeneity permit reproduction of the exit pathway related phenomena. It is also important to assess the critical threshold at which atrial pacing becomes possible, especially within our model, where the exit pathways provide a spatially limited electrical coupling between the SAN and atrial parts.

5.5 Further investigation for fibrosis validation

In contrast to the experimental studies that demonstrated initiation as well as persistence of micro re-entry , the present study focused on one simple anatomical configuration, in terms of a single central SAN fibrosis patch, that could permit persistence of an artificially induced micro re-entry. The limited implementation of fibrosis in our model SAN may explain the differences between our estimates of micro re-entry attributes and those in the experimental studies. However, the nature of fibrosis and its consequences on electrical conduction behaviour is complex, as reflected by multiple ongoing computational-experimental studies. The extensive electro-anatomical investigations undertaken by several experimental groups indicate that atrial fibrosis, and cardiac fibrosis in general, falls into three categories: diffuse, patchy, or compact .
The qualitative data relevant to this study indicate that SAN fibrosis is patchy in hearts where SAN micro re-entry could be observed . This observation correlates with known computational results that patchy fibrosis promotes the genesis and persistence of re-entry . Re-entry due to diffuse fibrosis may occur with a small probability and be relatively short-lived, unless very specific fibrosis conditions prevail within the whole 3D anatomy . However, it is also known that diffuse fibrosis assists in stabilising re-entrant arrhythmia by causing overall slowing of the propagating wave. In this study, we opted to establish a simple yet robust anatomical substrate for producing micro re-entry. The interplay between the hitherto unknown SAN fibrosis patch size, the amount of total fibrosis, the size of SEPs, and other factors has not been estimated as part of this study, and will form the subject of future computational, or experimental-computational, studies. A further factor relevant to a fuller understanding of SAN function is the myocyte-fibroblast interaction , especially in our 3D model with SEPs.
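For reference, a sketch of the three-variable Fenton–Karma kinetics referred to in Section 5.1 is given below. This is the standard published form of the model; the parameter values actually used in this study are those listed in S1 Section and S1 Table, so these expressions should be read as the general formulation rather than the exact implementation:

\[ \partial_t u = \nabla \cdot (D \nabla u) - \left( J_{fi} + J_{so} + J_{si} \right), \]
\[ J_{fi} = -\frac{v}{\tau_d}\, H(u-u_c)\,(1-u)(u-u_c), \qquad J_{so} = \frac{u}{\tau_0}\, H(u_c-u) + \frac{1}{\tau_r}\, H(u-u_c), \]
\[ J_{si} = -\frac{w}{2\tau_{si}}\left[ 1 + \tanh\!\left( k\,(u-u_c^{si}) \right) \right], \]
\[ \frac{dv}{dt} = H(u_c-u)\,\frac{1-v}{\tau_v^-(u)} - H(u-u_c)\,\frac{v}{\tau_v^+}, \qquad \frac{dw}{dt} = H(u_c-u)\,\frac{1-w}{\tau_w^-} - H(u-u_c)\,\frac{w}{\tau_w^+}, \]

where u is the normalised transmembrane potential, v and w gate the fast and slow inward currents respectively, and H is the Heaviside step function.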
The data from this study indicate that the insulating border-SEPs configuration plays a crucial role in regulating physiological and pathological SAN conduction. Inclusion of the configuration in 3D models may help to explain observations from other studies. Further experimental-computational exploration is required to translate the findings for clinical relevance. The 3D model can be obtained from the authors. The anatomical data used to simulate Figs and is provided as an electronic supplement zip file/online repository.
Supporting information

S1 Fig. Role of modelling parameters ( ) on SAN pacemaker and atrial action potentials. A: Range of SAN action potentials used to simulate basal pacemaking. B: Range of SAN action potentials used to simulate fast SAN pacemaking, to simulate the effect of ISO. C: Range of SAN action potentials used to simulate slow SAN pacemaking, to simulate the effect of Ach. (PDF)

S2 Fig. Geometries of the SAN segmented from imaging data (top row) and the modified ellipsoidal SAN (bottom row). The columns show views in the endo-to-epi transmural (left column), epicardial (middle column), and epi-to-endo transmural (right column) directions. (PDF)

S3 Fig. Illustration of the filament tracing method. A: Representative scroll wave in the 3D model. The “X” shows the arbitrarily chosen action potential recording location in the atrial part of the model. B: The recorded action potential at location “X”. Correlation was computed for several values of delay, τ, between voltages at a fixed time, t, and voltages after a delay at time t + τ. C: Correlation between voltage at time t and time t + τ of the recorded action potential. The optimal delay between consecutive frames was identified as 15.2 ms from the correlation. D: A phase plot of the 10 s long action potential was used to identify the parameters to be used in the computation of phase. V*(t) = 0.509 and V*(t + τ) = 0.59 were identified. E: The colour coding shows the phase of a representative scroll wave between −π and +π. Solid red shows the SAN to provide an anatomical reference to the reader. The phase singularity is shown as the black transmural filament. (PDF)

S1 Section. Cell model equations and parameters. (PDF)

S2 Section. Objective method for filament tracking. (PDF)

S1 Table. Model parameter values in the cell types of the human SAN model. Control values are given in black, and ISO (short SAN AP, short atrial AP) as well as Ach values (long SAN AP, short atrial AP) are given in red. (PDF)
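The time-delay phase computation described in S3 Fig lends itself to a compact implementation. The sketch below is illustrative only: the function and array names are hypothetical, the V* thresholds and the delay are the values quoted in the S3 Fig caption, and the singularity test (a ±2π phase winding around each 2×2 plaquette of a 2D slice) is a standard choice rather than necessarily the exact method of S2 Section.

```python
import numpy as np

def phase_field(v, tau, v_star_t=0.509, v_star_tau=0.59):
    """Time-delay embedding phase, as in S3 Fig.

    v   : voltage movie of shape (T, nx, ny), normalised units
    tau : optimal delay in frames (15.2 ms at the movie's frame interval)
    Returns phase in (-pi, pi] for frames 0 .. T-tau-1.
    """
    return np.arctan2(v[tau:] - v_star_tau, v[:-tau] - v_star_t)

def wrap(a):
    """Wrap angle differences into [-pi, pi)."""
    return (a + np.pi) % (2.0 * np.pi) - np.pi

def phase_singularities(phi):
    """Pixel indices of phase singularities in one 2D phase frame.

    Sums wrapped phase differences around every 2x2 plaquette; the sum is
    ~0 in smooth regions and ~+/-2*pi where a filament pierces the plane.
    """
    winding = (wrap(phi[1:, :-1] - phi[:-1, :-1])
               + wrap(phi[1:, 1:] - phi[1:, :-1])
               + wrap(phi[:-1, 1:] - phi[1:, 1:])
               + wrap(phi[:-1, :-1] - phi[:-1, 1:]))
    return np.argwhere(np.abs(winding) > np.pi)
```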
The association between health literacy and quality of life of patients with type 2 diabetes mellitus: A cross-sectional study | c638de1c-4e97-4a91-b7be-6908a851cb67 | 11527217 | Health Literacy[mh] | Diabetes Mellitus (DM) is a cluster of metabolic diseases characterized by elevated blood glucose levels . DM is caused by abnormalities in insulin action, insulin secretion, or both. The prolonged elevation of blood glucose levels in DM is associated with various complications including failure and dysfunction of different organs, such as nerves, kidneys, blood vessels, heart, and eyes . According to data from the International Diabetes Federation Diabetes Atlas, the prevalence of diabetes was found to be 537 million people worldwide in 2021 . The prevalence is increasing and is projected to reach 578 million by 2030 and approximately over 700 million DM patients by 2045. Prevalence rates are much higher in high-income countries compared to low-income countries . In 2021, the global prevalence of DM in urban areas (12.1%) was found to be higher than in rural regions (8.3%) . The prevalence of DM in Jordan is considered one of the highest globally. For example, in 2017 the prevalence was 23.7% in 2017 . Furthermore, the incidence is increasing, the prevalence of DM in 1994 among men in Jordan aged 25 years or older was 14.2% and increased to 18.3% in 2004, and reached 26.8% and 32.4% in 2009 and 2017, respectively . Health literacy (HL) is defined by the World Health Organization (WHO) as “the cognitive and social skills which determine the motivation and ability of individuals to gain access to, understand, and use information in ways that promote and maintain good health” . HL has become increasingly important for economic, social, and health development . Diabetes-related HL is the degree to which DM patients have the necessary abilities and skills to seek, analyze, understand, enumerate, and communicate DM-related information in their daily lives, clinics, and other healthcare settings . HL-driven interventions in DM patients have been found to play an important role in achieving glycemic control and enhancing DM self-management outcomes . Knowledge of self-care activities and how to seek and access health-related information is crucial for DM patients, as health systems are becoming increasingly complex . HL has been shown to improve DM patients’ health outcomes by enabling them to engage in beneficial health-related activities and perform appropriate self-care practices . Quality of life (QOL) is a significant health outcome; it represents the ultimate aim of all treatments and health-related interventions . QOL is a cornerstone of evaluating healthcare practice, modern medicines, and other health-related interventions . Health-related QOL is a measure of an individual’s perceived mental, physical, and social well-being . DM patients have lower QOL than individuals with no chronic diseases, yet they tend to experience better QOL compared to patients with most other chronic illnesses . DM patients have lower QOL than individuals with no chronic diseases, yet they tend to experience better QOL compared to patients with most other chronic illnesses . According to a cross-sectional study conducted in the Al-Ahsa region of Saudi Arabia, the main problems that negatively impact the QOL of diabetic patients are depression/anxiety, mobility problems, and pain/discomfort . 
Furthermore, a cross-sectional study carried out in northern Thailand reported that most DM patients (49.4%) had poor to moderate QOL . A recent cross-sectional study on type 2 diabetes patients in Iran found that improvements in HL were associated with better QOL . To date, there are no published studies evaluating the impact of health literacy on QOL among diabetic patients in Jordan. Understanding this relationship is important in addressing QOL in this patient group. Therefore, the present study aimed, for the first time, to examine the role of HL as a predictor of QOL among patients with type 2 diabetes in Jordan.
This study involved 400 patients with type 2 diabetes attending the endocrinology department at the outpatient clinic of Al Basheer Hospital, one of the largest public hospitals in Jordan, located in East Amman and serving a significant number of patients. The data were collected from the hospital between 1 August and 28 December 2023. The inclusion criteria were: a diagnosis of type 2 diabetes for at least one year; being 18 years or older; being literate, with the ability to read and write, since the tools used in this study are self-administered; and agreeing to participate in the study by providing written informed consent. The files of the patients scheduled for follow-up appointments the next day were reviewed, and only those meeting the inclusion criteria were considered. The researchers (S.A and Z.A) briefly explained the study objectives, the confidentiality of the collected information, and the participant’s right to withdraw from the study at any time. Additionally, patients were informed that self-completing the questionnaire would take approximately 10 minutes. Each participant also completed an informed consent form. The study adhered to the Declaration of Helsinki’s ethical guidelines. Ethical approval was obtained from the Al-Zaytoonah University of Jordan (Ref#1/4/2022–2023).

Data collection and study instruments

The Jordanian Diabetic Health Literacy Questionnaire (JDHLQ) was adopted for this study ; it is a validated tool used to evaluate diabetic patients’ health literacy in the Arabic-speaking population. It has two sections in addition to the sociodemographic data collection sheet, which gathers patients’ age, gender, educational level, monthly income, and marital status. The first section comprises five items focusing on the informative domain of health literacy, evaluating patients’ ability to assess, understand, and use information about type 2 diabetes. The second section consists of items assessing communicative aspects of health literacy and patients’ ability to communicate effectively about their disease, including their ability to explain the rationale for a diabetic diet, explain their condition to healthcare professionals, and ask them questions about type 2 diabetes. These sections collectively consist of 8 questions on a four-point Likert scale, with a maximum achievable score of 32. A higher total score on this scale represents better DM-related health literacy. Data on the patients’ medications and HbA1c values on the same day of the visit were collected from the patients’ files. Additionally, the EuroQol-5D (EQ-5D) , a validated tool, was used to assess QOL in Jordan . It is composed of five items assessing five dimensions: usual activities, self-care, mobility, anxiety/depression, and pain/discomfort. Each dimension has three levels of response or perceived problem (Level 1: no problems, Level 2: some problems, Level 3: extreme problems/inability to perform). Each unique health state is scored on a numerical scale from −0.594 to 1. A score of one represents a perfect health state, while scores of zero and lower represent death and “worse than death” (WTD), respectively .

Sample size calculation

In order to calculate the minimum sample size required to produce a regression model with adequate statistical power, the 50 + 8P equation was adopted, where P represents the number of predictors.
The study examined the association of 11 variables with patients’ EQ-5D scores. Therefore, the minimum required sample size was 138 patients.

Statistical analysis

Data analysis was performed using the Statistical Package for the Social Sciences (SPSS), version 26. Continuous variables were presented as medians and 25th–75th percentiles, while categorical variables were presented as frequencies and percentages. The internal consistency of the EQ-5D and of the informative and communicative domains of the JDHLQ was evaluated by computing Cronbach’s alphas. The normality of EQ-5D index scores was assessed using Q-Q plots. Since the data were not normally distributed, nonparametric tests were conducted, along with a quantile regression analysis to examine the association between EQ-5D index scores and various variables, including gender, age, monthly income, marital status, education level, insurance status, HbA1c, medications (insulin, metformin, and DPP-4 inhibitors), and JDHLQ score. Multicollinearity between the different predictors was evaluated by computing VIF values, and all the values were less than 3. The R² value was measured to assess the fitness of the produced model. The significance level was set at a threshold of p < 0.05.
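To make the two computational steps above concrete, the sketch below reproduces the 50 + 8P sample-size rule and shows how a median (q = 0.5) quantile regression of the EQ-5D index on the eleven predictors could be fitted with statsmodels. The data file and column names are hypothetical placeholders, not the study’s actual dataset.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Rule of thumb used for the power calculation: N >= 50 + 8 * P.
P = 11                     # number of candidate predictors
print(50 + 8 * P)          # -> 138, the minimum required sample size

# Median quantile regression of the EQ-5D index on the predictors.
# 'hl_qol.csv' and the column names below are illustrative only.
df = pd.read_csv("hl_qol.csv")
model = smf.quantreg(
    "eq5d ~ jdhlq + age + gender + income + marital + C(education)"
    " + insured + hba1c + insulin + metformin + dpp4",
    data=df,
)
result = model.fit(q=0.5)  # q = 0.5 fits the conditional median
print(result.summary())
```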
The present study enrolled 400 patients with type 2 diabetes (68.8% female). shows the demographic characteristics of the participants. The median age was 58 (50–64) years, with the majority being married (89.2%). A significant number had only elementary education (42.5%), and most had health insurance (79.0%). Furthermore, 81.2% of patients earned less than 500 Jordanian dinars (JD) per month. Metformin was the most frequently used medication (86.7%), followed by insulin (37.7%), while thiazolidinediones (TZDs) were the least used (1.8%). presents the frequency of responses to the diabetes-related information and diabetes-related communication items. The highest-rated ability was for the item “Understand the written information I receive from my healthcare provider”, with 47.0% rating their ability as 3 and 18.5% as 4; the lowest-rated ability was for the item “Evaluate the accuracy of diabetes-related information I obtain”, for which only 13.5% gave themselves a rating of 4. The median JDHLQ score was 22 (18–25) out of a maximum possible score of 32. The Cronbach’s alphas for the diabetes-related informative and communicative domains were 0.83 and 0.81, respectively, indicating high internal consistency. Patients’ responses to the EQ-5D items are displayed in . Regarding the mobility dimension, more than half of the patients answered “I have some problems in walking about” (58.8%). Most had no problems with self-care (66%). However, the highest percentage of patients had some problems with performing their usual activities (48.3%), and most had moderate pain or discomfort (47.5%). Moreover, most of the patients were moderately anxious or depressed (53.5%). The median EQ-5D index score was 0.66 (0.41–0.78). Cronbach’s alpha for the EQ-5D was 0.8. Findings from the quantile regression revealed that higher JDHLQ scores were significantly associated with higher EQ-5D scores (0.012, 95% CI (0.006–0.018), p < 0.001). Conversely, as patients’ age increased, their QOL scores significantly decreased (−0.004, 95% CI (−0.006, −0.001), p = 0.002). Additionally, patients with only an elementary education had significantly lower EQ-5D scores compared to those with postgraduate education (−0.106, 95% CI (−0.190, −0.023), p = 0.013). The R² value was 0.24, indicating that 24% of the variance in EQ-5D scores was explained by the model.
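The EQ-5D index values reported above are derived by mapping each five-dimension health state onto a population value set. The study does not state which value set was applied, but the quoted range of −0.594 to 1 matches the widely used UK TTO (Dolan) tariff for the EQ-5D-3L, so the following sketch assumes that tariff; treat the function and its inputs as illustrative.

```python
# Dolan (1997) UK TTO value set for the EQ-5D-3L.
# Dimensions: MO (mobility), SC (self-care), UA (usual activities),
# PD (pain/discomfort), AD (anxiety/depression); levels 1-3.
DECREMENTS = {
    "MO": (0.0, 0.069, 0.314),
    "SC": (0.0, 0.104, 0.214),
    "UA": (0.0, 0.036, 0.094),
    "PD": (0.0, 0.123, 0.386),
    "AD": (0.0, 0.071, 0.236),
}
CONSTANT = 0.081   # applied once if any dimension is above level 1
N3 = 0.269         # applied once if any dimension is at level 3

def eq5d_index(state):
    """Map a health state, e.g. {'MO': 2, 'SC': 1, ...}, to an index value."""
    levels = [state[d] for d in DECREMENTS]
    if all(lvl == 1 for lvl in levels):
        return 1.0                      # full health: no decrements apply
    value = 1.0 - CONSTANT
    for dim, lvl in state.items():
        value -= DECREMENTS[dim][lvl - 1]
    if any(lvl == 3 for lvl in levels):
        value -= N3
    return round(value, 3)

# Sanity checks against the range quoted in the Methods:
print(eq5d_index(dict(MO=1, SC=1, UA=1, PD=1, AD=1)))  # 1.0 (best state)
print(eq5d_index(dict(MO=3, SC=3, UA=3, PD=3, AD=3)))  # -0.594 (worst state)
```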
Type 2 diabetes is a chronic disease that can have a serious negative impact on patients’ social, emotional, and physical health. Understanding diabetic patients’ QOL offers important insights into how their condition impacts their day-to-day functioning, mental health, and other aspects of their lives . Consequently, healthcare professionals can better meet the unique needs of diabetic patients by customizing interventions and support services, which will ultimately enhance their overall quality of life. Furthermore, evaluating QOL can point out areas of concern or areas that require more support, which can help inform healthcare policies and interventions meant to improve the well-being of people with type 2 diabetes. The International Diabetes Federation (IDF) identifies the Middle East as a key region for diabetes prevalence. While standard treatments exist, preventive measures must be tailored to local cultures. Despite a shared language and religion, Middle Eastern countries exhibit significant cultural, economic, and healthcare diversity. Additionally, issues like war, forced migration, climate change, and political instability complicate healthcare delivery . Urbanization, socioeconomic development, sedentary lifestyles, and high consumption of fats and sugary foods have all contributed to increasing obesity and diabetes rates, presenting significant challenges for the region . Growing demands for individuals to take greater responsibility for their health have underscored the need for adequate health education. Low health literacy is seen as a significant obstacle to enhancing health outcomes. Research has consistently shown that individuals with low health literacy tend to have inadequate diabetes knowledge, engage in less effective self-management, experience poor blood glucose control, and incur higher healthcare costs . This issue is particularly important in the Middle East, where the rising prevalence of diabetes has become a critical concern. In a study of 256 patients with type 2 diabetes in Saudi Arabia, 27.3% exhibited marginal health literacy, while 35.5% had inadequate health literacy . Another study revealed that only 11% of individuals with type 2 diabetes attending outpatient clinics across the UAE demonstrated adequate health literacy levels . Thus, it is essential to develop policies and strategies that reflect the values and practices of each society. There is increasing evidence linking diabetes to a lower quality of life, with health literacy accounting for 47.5% of the variance in health-related QOL among diabetic patients . However, findings on the relationship between poor health literacy and lower health-related QOL in these patients have been inconsistent . The current study is the first to assess the role of HL as a predictor of QOL among patients with type 2 diabetes in Jordan. Moderate QOL was found among the study participants: the median EQ-5D index score was 0.66 (0.41–0.78). Higher EQ-5D scores have been reported among diabetic patients in previous studies conducted in Jordan , Iran , India , Nigeria , China , Ethiopia , and Korea . The poor QOL found among type 2 diabetes patients in the present study highlights the importance of exploring the factors associated with reduced QOL in this patient group. The current study found a significant relationship between older age and poor QOL. Similar results have been reported in earlier studies .
Older diabetic patients are likely to have a longer history of the disease, which increases their disease burden and risk of complications and can significantly lower their QOL. They are also more likely to have multiple comorbidities, which can further impair QOL. In the current study, patients with lower educational levels had significantly lower EQ-5D scores than those with higher education levels. Several previous studies have likewise found better QOL among diabetic patients with higher educational levels. Higher-educated patients typically have a greater understanding of their disease, including available treatment options and the possible consequences of diabetes-related complications. As a result of this greater awareness, they may be more proactive in controlling the disease, adhering to their treatment plans and making lifestyle modifications that enhance their QOL.

Type 2 diabetes outcomes and management are significantly influenced by HL. Earlier research has shown that HL improves the QOL, glycaemic control and self-care practices of patients with type 2 diabetes. In the current study, higher JDHLQ scores were significantly associated with higher EQ-5D scores. A Chinese study of patients with diabetic peripheral neuropathy showed that higher HL was significantly associated with improved QOL, and other studies have confirmed a positive relationship between HL and QOL among patients with type 2 diabetes in Burkina Faso, Saudi Arabia and Malaysia. This is likely because patients with higher HL have a better understanding of their disease and how to manage it, resulting in more effective self-care practices, better health outcomes and higher QOL. Higher HL may also help patients communicate more effectively with healthcare providers, allowing them to receive optimal support and treatment. The present study revealed that most participants demonstrated moderate proficiency in understanding and communicating diabetes-related information; improving HL through targeted educational interventions could therefore improve patients' QOL.

Limitations and future research

The findings of the current study are subject to recall and social desirability biases, since part of the results were derived from self-reported data. Participants who were interested in the study's aims may have been more motivated to enrol, which could introduce selection bias. Additionally, the results were based on a single hospital in Jordan, although Al-Basheer Hospital is one of the largest public hospitals in the country and serves a substantial number of patients. Finally, the study was limited to patients with type 2 diabetes; future studies assessing health literacy among patients with type 1 diabetes are warranted.
The findings of the present study show that HL had a significant positive impact on patients' QOL, underscoring the importance of including HL assessment and intervention in the diabetes care plans of patients in Jordan. To help patients better understand and manage their disease, healthcare professionals should identify those with low HL and offer tailored education and support, with the aim of improving QOL and type 2 diabetes management outcomes.
S1 File. Inclusivity in global research. (DOCX)
Effect of Growth Hormone on Branched-Chain Amino Acids Catabolism in Males With Hypopituitarism

Introduction

Hypopituitarism is characterised by the partial or complete loss of anterior pituitary hormones, including growth hormone (GH), luteinising hormone (LH), follicle-stimulating hormone (FSH), adrenocorticotropic hormone (ACTH) and thyrotropin (TSH). Owing to GH deficiency (GHD), patients with hypopituitarism exhibit increased visceral adiposity, insulin resistance (IR), dyslipidaemia and hyperglycaemia, which increase the incidence and mortality of cardiovascular diseases. Skeletal muscle, a target organ of GH, undergoes atrophy and metabolic disturbance in the absence of adequate GH levels. In adults with GHD, lean body mass (LBM) and muscle mass are reduced because of disrupted protein metabolism. GH replacement therapy in GHD stabilises protein metabolism by favouring protein synthesis pathways over amino acid oxidation. Muscle atrophy is associated with increased expression of muscle atrophy F-box protein (MAFbx) and muscle-specific RING finger 1 (MuRF1), which induce ubiquitination and proteasome-mediated degradation of target proteins, resulting in rapid loss of muscle mass. Russell-Jones et al. observed that GH supplementation in GHD subjects increased protein synthesis and reduced protein oxidation; GH replacement therapy restored protein stability by favouring amino acid utilisation in protein synthesis pathways, thereby ameliorating muscle mass loss. Despite these positive effects on protein metabolism, the specific role of GH in modulating the ubiquitin-proteasome system during this process remains unknown.

Branched-chain amino acids (BCAAs), namely leucine, isoleucine and valine, are indispensable amino acids that mammals cannot synthesise de novo; because a subset of the enzymes essential for their biosynthesis is lacking in human and other mammalian tissues, they must be acquired through dietary intake. Amino acids derived from dietary proteins are transported through the circulation to skeletal muscle, where they play a pivotal role in synthesising essential proteins. Elevated concentrations of BCAAs have been implicated in the pathogenesis of IR, type 2 diabetes (T2D) and various cardiometabolic diseases. In parallel with the clinical features observed in obesity and T2D, hypopituitarism presents with central obesity, IR and an increased susceptibility to cardiovascular disease. Exploring the interplay between BCAAs and the metabolic disturbances of hypopituitarism may therefore provide valuable insights into the mechanisms underlying these clinical features.

Several studies have proposed that the elevated circulating BCAAs observed in patients with IR result from dysregulated BCAA oxidation in adipose and hepatic tissues, primarily reflecting impaired function of branched-chain aminotransferase (BCAT) and the branched-chain α-keto acid dehydrogenase (BCKDH) complex. In murine models of obesity and IR, levels of valine and leucine/isoleucine have been reported to increase by 20% and 14%, respectively, a rise linked to the downregulation of multiple enzymes in the oxidation pathway. This transcriptional downregulation of BCAA oxidation enzymes has also been observed in human participants with obesity and can be reversed by weight-loss surgery, accompanied by a decrease in circulating BCAA levels.
In obesity and IR, minimal changes are observed in hepatic BCKDH abundance. Instead, BCKDH activity is impaired primarily through induction of the kinase BDK and repression of the phosphatase PPM1K, leading to hyperphosphorylation of BCKDH and inhibition of its enzymatic activity. Moreover, transplantation of normal adipose tissue into mice lacking BCAT2 has been shown to reduce circulating BCAA levels by 30%–50%. Collectively, these findings suggest that the increase in circulating BCAAs in obesity and diabetes is attributable, at least in part, to impaired BCAA oxidation resulting from decreased expression or altered phosphorylation of BCAA oxidation enzymes.

To investigate the impact of GH on BCAA catabolism in males with hypopituitarism, the current study conducted a case–control investigation involving 133 individuals with hypopituitarism and 90 matched controls. We also established an animal model of hypopituitarism (hypophysectomized rats). This study provides an in-depth understanding of the mechanisms underlying the effect of GH on BCAA catabolism in hypopituitarism.
Materials and Methods

2.1 Participant Recruitment

Patients and healthy controls were recruited at Ruijin Hospital in Shanghai, China, between January 2016 and December 2018. The recruitment process adhered to the same protocol as a previously published study, including both congenital and acquired hypopituitarism, with the further exclusion of patients with normal GH levels. The study protocol was approved by the Board of Medical Ethics at Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, China. Ethical considerations and patient confidentiality were maintained throughout recruitment and the entire duration of the study.

2.2 Hormone Replacement Plan of Patients With Hypopituitarism

Physiological dosages of glucocorticoids and/or thyroid hormone were administered after diagnosis (median diagnosis age 16.50 years). A subset of patients had undergone GH replacement during childhood and had discontinued the treatment for a minimum of 24 months. Gonadotropin treatment was administered to all patients with LH/FSH deficiency for at least 24 months.

2.3 Measurement of Biochemical Markers in Patients With Hypopituitarism

Serum total cholesterol, high-density lipoprotein cholesterol, low-density lipoprotein cholesterol and triglycerides were measured by an automated enzymatic method on an autoanalyser (Beckman Coulter, California, USA). Plasma glucose was measured by the glucose oxidase method on the same autoanalyser. Serum insulin was measured with a commercially available RIA kit (Diagnostic Systems Laboratories, Minnesota, USA). IR was quantified using the homeostasis model assessment of insulin resistance (HOMA-IR) index: HOMA-IR = insulin [μU/mL] × glucose [mmol/L] / 22.5.
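As a quick illustration of the HOMA-IR formula above, the following minimal Python sketch computes the index; the example fasting values are hypothetical.

```python
def homa_ir(insulin_uU_per_ml: float, glucose_mmol_per_l: float) -> float:
    """HOMA-IR = insulin [uU/mL] x glucose [mmol/L] / 22.5."""
    return insulin_uU_per_ml * glucose_mmol_per_l / 22.5

# Hypothetical fasting values: insulin 12 uU/mL, glucose 5.6 mmol/L
print(round(homa_ir(12.0, 5.6), 2))  # 2.99
```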
2.4 Amino Acids Quantification in Patients With Hypopituitarism Through Untargeted Metabolomics

Metabolic profiling of serum samples was conducted on an Agilent 1290 Infinity LC system (Agilent Technologies, California, USA) coupled with an AB SCIEX Triple TOF6600 system (AB SCIEX, California, USA) to measure amino acid levels. Chromatographic separation was performed on an ACQUITY HSS T3 1.8 μm column. Variable importance in projection (VIP) values for each variable in the orthogonal partial least squares-discriminant analysis (OPLS-DA) model were calculated to identify the metabolites contributing to classification. Variables with p-values < 0.05 and VIP values > 1 were considered statistically significant. The method provides relative quantification of the amino acid (AA) profile, with AA levels recorded as peak areas.

2.5 Hypophysectomized Rat Models and GH Intervention

Male Sprague–Dawley (SD) rats aged 3–4 weeks (weighing 70–80 g) were randomised equally into three groups. Hypophysectomy was performed as follows. A 1.5–2.5 cm midline incision was made on the neck, starting from the lower jaw downward. Blunt dissection along the direction of the muscle fibres separated the subcutaneous tissue and salivary glands, exposing the trachea. The anterior neck fascia was opened to the left or right of the midline and the salivary glands were retracted. A syringe needle was inserted between the 3rd and 4th tracheal cartilage rings and, after the needle was withdrawn, a PE50 or PE90 tube was inserted along the needle track. The muscle tissue and trachea were retracted on both sides to fully expose the base of the skull. The meninges and adherent tissues on the skull base were scraped off to expose the sphenoparietal suture. A hole was drilled in the skull base to access the cranial cavity. After removal of the membrane covering the pituitary surface, a suction tube connected to a negative pressure pump (at 320–500 mmHg) was used to aspirate the pituitary gland. After haemostasis was achieved with cotton swabs and gauze, the surgical retractor was removed. Once the animal regained spontaneous breathing, the endotracheal tube was removed. The sternocleidomastoid muscle was retracted to cover the tracheotomy incision fully, and the skin incision was sutured. Before the rat regained consciousness, oral and tracheal secretions were monitored to prevent asphyxiation. After recovery, clean water was provided, and subcutaneous injection of 10–15 mL of Ringer's solution or glucose saline solution was considered for rats with significant intraoperative blood loss. Because the energy metabolism of hypophysectomized rats is lower, care was taken to keep the animals warm. The success criterion for the model was a postoperative body weight gain of less than 10% after 2 weeks. Rats in the control group underwent a sham operation involving general anaesthesia and the cutting and suturing of the skin of the neck.

After 2 weeks of recovery, one of the hypophysectomized groups received rhGH (0.1 mg/kg/day) (Genlei, Changchun, China) by subcutaneous injection daily, including weekends, at a relatively fixed time (11:00–12:00 am) for 2 weeks. The other groups were injected with 100 μL of physiological saline. The dose and duration of rhGH replacement were based on the physiological replacement doses of GH in adult GHD. Body weight and fasting blood samples were collected before the operation, 2 weeks after the operation and 2 weeks after rhGH intervention. At the end of the study, the overnight-fasted rats were euthanised. The livers and extensor digitorum longus muscles were excised and immediately frozen in liquid nitrogen. Blood samples were collected, and plasma was obtained by centrifugation (2200× g, 4°C) and stored at −80°C.

2.6 Amino Acids Quantification in the Serum of Rats by Targeted Metabolomics

The concentrations of serum amino acids were determined by high-performance liquid chromatography/mass spectrometry. In brief, 80 μL of an ethylene diamine tetraacetic acid sample was deproteinised with 1 mL of methanol and subsequently purified through ion exchange columns. Statistical significance was assigned to metabolites with p-values < 0.05 and VIP values > 1. A heatmap visualising the differential expression of these metabolites across the time points was constructed. The LC–MS analysis was conducted by Applied Protein Technology Co. Ltd. (APTBIO, Shanghai, China).

2.7 Four-Dimensional (4D) Label-Free Phosphorylation Proteomics

The protein concentration was quantified using the BCA Protein Assay Kit (Beyotime Biotechnology, Shanghai, China). For filter-aided sample preparation, 200 μg of protein was mixed with 30 μL of SDT buffer. Phosphopeptides were enriched using the High-Select Fe-NTA Phosphopeptides Enrichment Kit (Thermo Scientific, Massachusetts, USA). LC–MS/MS analysis was performed on a timsTOF Pro mass spectrometer coupled to a Nanoelute system (Bruker, Massachusetts, USA) over 60 min. The 4D label-free phosphorylation proteomics was conducted at Applied Protein Technology Co. Ltd. Differentially expressed phosphoproteins (DEPPs) were identified using thresholds of p-values < 0.05 and VIP values > 1. Gene set enrichment analysis of the DEPPs was performed against the Kyoto Encyclopedia of Genes and Genomes (KEGG) database, generated automatically by Metascape (https://metascape.org).

2.8 Western Blotting

Tissue homogenates were lysed in RIPA lysis buffer supplemented with a protease inhibitor cocktail (APExBio, Houston, USA). Approximately 50–70 μg of protein was separated by 6%–12% SDS-PAGE, transferred to PVDF membranes (Bio-Rad, Hercules, USA) and probed with primary antibodies. Antibodies against BCAT1 (D6D4K, 88785), BCAT2 (D8K3O, 79764), BCKDH-E1α (E4T3D, 90198) and phospho-BCKDH-E1α Ser293 (E2V6B, 40368) were sourced from Cell Signalling Technology (CST, Massachusetts, USA), while the MuRF1 antibody (ab183094) was acquired from Abcam (Abcam, Massachusetts, USA).

2.9 Calculation of Muscle Cross-Sectional Area (CSA)

The CSA was calculated assuming the cross-section to be approximately elliptical, using the formula for the area of an ellipse, A = π × L/2 × W/2, where A is the area, L the major axis, W the minor axis and π approximately 3.14. The CSA was calculated for 30 approximately elliptical cells and the mean value determined. One-way ANOVA was employed to compare muscle CSA among the three groups of animals.
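The CSA estimate and group comparison described above can be sketched as follows; the axis measurements are invented for illustration and do not reproduce the study's data.

```python
import math
from scipy.stats import f_oneway  # one-way ANOVA

def ellipse_area(major_um: float, minor_um: float) -> float:
    """Elliptical cross-sectional area: A = pi * (L/2) * (W/2), in um^2."""
    return math.pi * (major_um / 2.0) * (minor_um / 2.0)

# Hypothetical (major, minor) axis pairs (um) for cells in each group
wt_axes   = [(52, 40), (50, 42), (55, 38), (49, 41)]
pr_axes   = [(50, 39), (48, 40), (53, 37), (47, 42)]
rhgh_axes = [(45, 36), (44, 37), (46, 35), (43, 36)]

wt   = [ellipse_area(l, w) for l, w in wt_axes]
pr   = [ellipse_area(l, w) for l, w in pr_axes]
rhgh = [ellipse_area(l, w) for l, w in rhgh_axes]

f_stat, p_value = f_oneway(wt, pr, rhgh)  # compare the three groups
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```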
2.10 Assessment of Cell Density in Skeletal Muscle

To evaluate cell density in skeletal muscle, regions of 0.050 mm² were selected from HE-stained images and the number of skeletal muscle cells within these regions was counted. For cells located at the edges of a selected area, only those with more than half of their volume within the region were included. Four random regions of equal area were selected, and the mean cell count was calculated.

2.11 Statistical Analysis

The Kolmogorov–Smirnov test was performed to assess data normality. Continuous variables are presented as the mean ± SD for normally distributed variables or as medians (interquartile ranges) for skewed variables. For the multiple metabolite comparisons in the metabolome analysis (13 amino acids), q-values (p-values adjusted by the False Discovery Rate (FDR) method) were used in place of raw p-values. For comparisons of multiple groups, one-way analysis of variance (ANOVA) was used, followed by Tukey's honest significant difference post hoc test. Correlations between amino acids and metabolic parameters were assessed using Pearson's coefficient. The diagnostic abilities of valine and leucine in patients with hypopituitarism were assessed with a receiver operating characteristic (ROC) curve and the area under the ROC curve (AUC). Analyses were performed with GraphPad Prism 8.0 (GraphPad, California, USA) or R software. All significance tests were two-tailed, and statistical significance was set at p < 0.05.
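The q-value adjustment described in Section 2.11 is most commonly implemented as a Benjamini–Hochberg correction (assumed here; the text does not name the exact procedure). A minimal sketch with hypothetical p-values:

```python
from statsmodels.stats.multitest import multipletests

# Hypothetical raw p-values for the 13 amino-acid comparisons
p_values = [0.001, 0.004, 0.010, 0.020, 0.030, 0.049, 0.060,
            0.120, 0.200, 0.350, 0.500, 0.700, 0.900]

reject, q_values, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")
for p, q, r in zip(p_values, q_values, reject):
    print(f"p = {p:.3f} -> q = {q:.3f} ({'significant' if r else 'ns'})")
```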
Results

3.1 Physical and Biological Characteristics of Hypopituitarism

There were no significant differences in age, height, weight or BMI between individuals with hypopituitarism and their age-matched male controls. However, elevated levels of fasting triglycerides, total cholesterol, glucose, insulin and HOMA-IR were noted in individuals with hypopituitarism compared with the control group (Table ).

3.2 Increased Circulating BCAAs in Hypopituitarism

In total, thirteen amino acids and metabolites displayed differential abundances in patients with hypopituitarism. Higher levels of alanine, arginine, glutamate, histidine, leucine, phenylalanine, proline, pyroglutamic acid, tryptophan and valine were observed, while glutamine, lysine and norleucine levels were lower in hypopituitarism than in healthy controls (Table ). The elevations of valine and leucine were particularly notable, with a 1.15-fold increase for leucine (p < 0.001) and a 1.57-fold increase for valine (p < 0.001). Valine displayed promising potential as a diagnostic biomarker, with an area under the curve (AUC) of 0.8943 (95% CI = 0.8495–0.9392) (Figure ).
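As an illustration of how such a single-marker ROC analysis can be computed, the sketch below uses scikit-learn on hypothetical labels and valine peak areas; it does not reproduce the study's data or its AUC.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0])  # 1 = hypopituitarism, 0 = control
valine = np.array([9.1, 8.7, 5.9, 8.3, 5.2, 6.0, 5.8, 6.4, 5.5])  # peak areas (a.u.)

auc = roc_auc_score(y_true, valine)
fpr, tpr, thresholds = roc_curve(y_true, valine)
print(f"AUC = {auc:.3f}")  # values near 1 indicate good discrimination
```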
3.3 Increased Circulating BCAAs Significantly Correlated With IR in Hypopituitarism

Among the amino acids, valine, leucine and glutamate exhibited positive correlations with biomarkers of lipid and carbohydrate metabolism (Figure ). Specifically, valine concentration correlated positively with triglycerides, insulin and HOMA-IR (r = 0.201, p < 0.05; r = 0.278, p < 0.01; r = 0.265, p < 0.01), and leucine concentration was likewise positively correlated with these biomarkers. A positive correlation was also observed between glutamate and triglycerides, LDL, glucose, insulin and HOMA-IR. In the healthy control group, no significant correlation was observed between valine, leucine or glutamate and HOMA-IR.

3.4 Hepatic Steatosis, CSA and IR in Hypophysectomized Rats

Hypophysectomized rats were used as the animal model for investigating the effects of rhGH intervention (Figure ). Following hypophysectomy, the rats exhibited a noticeable decrease in appetite and gained less than 10% body weight within 2 weeks (Table ), whereas administration of rhGH during the subsequent two-week period resulted in significant weight recovery (Figure ). Successful removal of the pituitary gland was also confirmed by serum IGF-1 levels, which decreased by over 80% in hypophysectomized rats (PR group) and increased significantly in the rhGH group (Figure ). Liver tissue analysis revealed hepatic steatosis in the PR group that was partly reversed by rhGH intervention, as evidenced by Oil Red O staining (Figure ).

Morphologically, muscle cells in the WT group were relatively loosely distributed, while those in the PR and rhGH groups were closely packed (Figure ). The weight and volume of thigh/body decreased significantly in the PR group, consistent with enhanced proteolysis (Figure ). The mean CSA of muscle cells was 1630 ± 319.6 μm² in the WT group, 1548 ± 431.4 μm² in the PR group and 1266 ± 266.1 μm² in the rhGH group; the WT group thus had the largest and the rhGH group the smallest mean CSA (Table ). The CSA frequency distribution is shown in Figure . Within an area of 0.050 mm², the cell count was 19.75 ± 0.96 in the WT group, 26.25 ± 2.63 in the PR group and 31.60 ± 2.65 in the rhGH group (Figure ), with significant differences among the three groups; cell density was notably increased in the PR and rhGH groups. HOMA-IR was relatively higher in the PR group than in the rhGH group (Figure ), and the fasting insulin concentration was paradoxically highest in the PR group, a condition mitigated by rhGH replacement (Figure ).

3.5 Increased BCAAs in Hypophysectomized Rats

Circulating amino acids were evaluated in hypophysectomized rats at three time points: prior to surgery (D0), and before (D14) and after (D28) rhGH replacement. The levels of 66.7% (20/30) of amino acids or derivatives were substantially perturbed after hypophysectomy. During rhGH intervention, only four amino acids, the BCAAs leucine, valine and isoleucine, together with hydroxyproline, showed partial normalisation of their concentrations (p < 0.05, q value < 0.2) between D14 and D28 (Figure , Table ). Specifically, BCAA concentrations rose markedly after hypophysectomy and then trended downward after rhGH intervention (Figure ), whereas the PR group showed no notable change between Day 14 and Day 28 (Figure ).
3.6 Regulation of BCAA Degradation and Ubiquitin-Dependent Proteolysis in Hypophysectomized Rats

To elucidate the mechanism underlying the elevation of circulating BCAAs in hypopituitarism, a comprehensive 4D label-free quantitative phosphoproteomics analysis was performed on liver tissues from the WT, PR and rhGH groups. A total of 3771 phosphoproteins were evaluated across the three cohorts, and 194 upregulated and 361 downregulated differentially expressed phosphoproteins (DEPPs) were identified in the comparison between the rhGH and PR groups (Figure ). KEGG pathway analysis revealed that the DEPPs between the rhGH and PR groups were involved in the 'mRNA metabolic process,' 'diseases of signal transduction by growth factor receptors and second messengers,' and 'valine, leucine, and isoleucine degradation' pathways (Figure ).

In the hypophysectomized rats, 12 proteins in the 'valine, leucine, and isoleucine degradation' pathway exhibited significantly perturbed phosphorylation: BCKDHA-S362, ALDH9A1-T28, EHHADH-S720, HADHA-T392, HADH-S73, HMGCL-S22, HMGCS1-S476, HMGCS2-S324, ALDH6A1-S525, PCCA-Y152, ACAA2-S332 and BDH1-S25 (Figure ). No significant differences were observed in the total levels of these proteins, suggesting that the effects of GH on hepatic BCAA degradation occur primarily at the level of phosphorylation rather than protein expression (Table ). Among these proteins, the phosphorylation states of EHHADH-S720, HADH-S73, ACAA2-S332, HMGCS1-S476, BCKDHA-S362 and ALDH6A1-S525 were reversed following rhGH intervention. In particular, phosphorylation of BCKDHA at S362 was reduced to 0.07-fold (Figure ) but was substantially restored after GH intervention, and western blotting corroborated these observations (Figure ).

Given that the BCAA degradation pathway is initiated in skeletal muscle in a manner dependent on BCAT expression, we noted a significant upregulation of BCAT1 and BCAT2 in the tibialis anterior muscle of hypophysectomized rats (Figure ). MuRF1, a ubiquitin ligase in ubiquitin-proteasome-mediated protein degradation, increased significantly after hypophysectomy and was significantly restored following rhGH administration.
Discussion

In human cohorts, BCAAs and related metabolites are now widely recognised as among the strongest biomarkers of obesity, IR, T2D and cardiovascular disease. Within the present cohort, BCAA levels were significantly elevated in patients with hypopituitarism compared with healthy controls, and BCAA concentrations correlated positively with both triglycerides and IR. However, no definitive conclusions about a causal role of BCAAs in disease pathogenesis can be drawn from correlative metabolic data alone.

IR is characterised by reduced sensitivity or responsiveness to the metabolic actions of insulin, encompassing defects in glucose uptake and oxidation, diminished glycogen synthesis and impaired suppression of lipid oxidation. Felig et al. proposed that the elevated levels of BCAAs and aromatic amino acids observed in individuals with obesity may be a consequence, rather than a cause, of IR. However, some evidence suggests that BCAAs may independently contribute to IR. Metabolomic studies have indicated that elevated BCAA levels in individuals with normal fasting glycaemia are associated with an increased risk of IR and diabetes. Recent human genetic studies of variants affecting insulin sensitivity and lipid traits in relation to BCAA levels have proposed a unifying model in which the BCAA increases observed in pre-diabetic individuals with obesity result primarily from IR but, once elevated, BCAAs could play a causal role in the progression from prediabetes to overt diabetes. It is plausible that elevated BCAAs are a consequence of IR yet subsequently influence lipid and glucose metabolism in the context of hypopituitarism. Whether BCAAs are mere consequences, causative factors or simply biomarkers of impaired insulin response in hypopituitarism remains to be elucidated.

The regulation of circulating BCAAs involves dietary consumption, protein synthesis and oxidation, and the rate of proteolysis and release of free amino acids; de novo biosynthesis does not occur in human tissues. To mitigate the influence of diet-derived amino acids, which rise notably after ingestion of animal protein-rich meals, we assessed amino acid concentrations under fasting conditions. In the hypophysectomized rat model, which is characterised by severe loss of appetite and the absence of obvious weight gain during the experiment, the elevation of fasting BCAAs is unlikely to be attributable to an excess of diet-derived amino acids. On the contrary, the marked reduction in skeletal muscle weight and volume and the increased density of muscle fibres indicate muscle atrophy. Mice with GH deficiency showed smaller muscle fibres but normal muscle function in Mavalli's study. Owing to the remarkable intragroup variability within the PR and rhGH groups, no significant difference in CSA was observed. The compaction of muscle fibres may itself be related to atrophy, as reduced interstitial space can impair the nutritional supply to muscle fibres and potentially trigger their atrophy. Several studies have noted that a more compact arrangement may be associated with a shift in muscle fibre type towards the more densely packed slow-twitch fibres.
Additionally, the substantial increase in the levels of most amino acids and derivatives in both patients with hypopituitarism and the hypophysectomized rat model corroborates a state of enhanced proteolysis. This is further supported by the heightened expression of MuRF1, a key enzyme in the ubiquitin-proteasome-mediated protein degradation pathway. Collectively, the increased BCAA concentrations may reflect elevated proteolysis in the post-absorptive state. Holeček et al. have proposed that skeletal muscle plays the dominant role in BCAA catabolism and that activated proteolysis and IR can contribute to elevated BCAA levels. The ubiquitin-proteasome system (UPS) is implicated in the degradation of the major skeletal muscle proteins and plays a significant role in muscle wasting, with the ubiquitin-protein ligases MAFbx and MuRF1 as crucial components. Although there is no direct evidence that GH regulates the expression of MAFbx and MuRF1, one study has shown that ghrelin, which restores plasma GH levels in burned rats, decreases proteolysis by modulating MuRF1 and MAFbx. Furthermore, a significant reduction in circulating BCAA levels has been observed during fasting with GH replacement therapy, attributed to diminished proteolysis. In the current study, MuRF1 expression was significantly augmented following hypophysectomy, yet it could be mitigated by rhGH intervention, implying its role as a GH-regulated marker of muscle proteolysis.

The 4D label-free quantitative phosphoproteomics identified KEGG pathways that were significantly enriched and distinguished the rhGH group from the PR group, notably 'mRNA metabolism,' 'diseases of signal transduction by growth factor receptors and second messengers,' and 'valine, leucine, and isoleucine degradation.' The inclusion of 'diseases of signal transduction by growth factor receptors and second messengers' is expected, given the two-week rhGH intervention in hypophysectomized rats. Among the 12 proteins with dysregulated phosphorylation in the 'valine, leucine, and isoleucine degradation' pathway, BCKDHA, the rate-limiting enzyme of hepatic BCAA degradation, was significantly dephosphorylated. BCKDHA is the α-subunit of the E1 component of the BCKDH complex, which catalyses the critical and irreversible second step of BCAA degradation. BCKDHA activity is regulated by the kinase BCKDK, which inhibits it through phosphorylation, and the phosphatase PPM1K, which activates the BCKDH complex by dephosphorylating BCKDHA. Although the phosphorylation site identified in our proteomic analysis was BCKDHA-S362, we employed the canonical BCKDHA-Ser293 antibody for western blot validation and obtained a consistent trend. The dephosphorylated state of BCKDHA in the PR group therefore indicates enhanced BCAA degradation.

BCATs are responsible for the first step of BCAA catabolism, catalysing the reversible transamination between BCAAs and their corresponding branched-chain α-keto acids (BCKAs). A decline in BCAT expression may reduce BCAA catabolism and thereby raise serum BCAA concentrations; for example, BCAT2 deficiency has been demonstrated to reduce circulating BCAA levels by 30%–50%.
BCAT2 also enhances BCAA uptake to sustain BCAA catabolism and mitochondrial respiration in the development of pancreatic ductal adenocarcinoma. We found that BCAT expression increased in the PR group, which, like the dephosphorylation of BCKDHA, may indicate enhanced BCAA degradation. Collectively, these findings suggest that the BCAA degradation pathway is activated in GH deficiency, a state that appears amenable to correction through rhGH intervention. These results were unexpected, given that IR-related diseases such as obesity typically entail broad transcriptional repression of the BCAA degradation pathway. The mechanism underlying this phenomenon remains unexplored, especially considering the scarcity of research on the interaction between GH and BCAA oxidation, and further investigations are warranted to unravel the relationship between GH and BCAA metabolism in GHD.

Despite the promising findings, the present study has notable limitations. Firstly, BCAA metabolism has been shown to be sex-dependent in previous studies, which may limit the generalisability of our findings, since only male patients and male animal models were included. Secondly, we did not collect data on patients' daily protein intake and activity levels, which can also affect BCAA metabolism. Thirdly, hypophysectomized rats were chosen as the animal model of hypopituitarism rather than GH receptor knockout transgenic rats; this choice enabled demonstration of GH's compensatory effect but, because no other hormone replacement was provided, precluded exploration of the potential effects of other pituitary hormones. Fourthly, muscle fibre typing and functional studies were not conducted; such analyses are applicable only to fresh samples, and our samples had already been fixed and embedded in paraffin, so the optimal window for these measurements was missed. To partially compensate, we assessed cellular arrangement density, since fibre types differ in their packing: slow-twitch (Type I) fibres are generally more densely and compactly arranged than fast-twitch (Type II) fibres. Future work will address these methodological limitations by analysing muscle fibre types and their functional characteristics in fresh samples. Additionally, we lacked data on patients' other comorbidities, which can also alter BCAA metabolism. Finally, the patients enrolled in our study did not undergo rhGH therapy in adulthood, owing to economic constraints or concerns about tumour recurrence, cancer and diabetes risk; consequently, we were unable to observe the impact of GH intervention on BCAA levels in patients with hypopituitarism.

In conclusion, our study establishes a clear association between elevated circulating BCAAs and IR in hypopituitarism. The activation of the BCAA degradation pathway in the GHD state suggests a complex interplay between GH and BCAA metabolism. The observed increase in fasting BCAAs likely stems from augmented proteolysis and IR, which raise circulating BCAA levels beyond their degradation and utilisation (Figure ).
While these findings provide valuable insights into the impact of GH on BCAA catabolism, the study's limitations underscore the need for additional investigations to refine our understanding and potentially inform novel therapeutic strategies.
Yuwen Zhang: conceptualization (equal), formal analysis (equal), funding acquisition (equal), investigation (equal), resources (equal), validation (equal), writing – original draft (equal), writing – review and editing (equal). Zhiqiu Ye: data curation (equal), methodology (equal), resources (equal), validation (equal), writing – review and editing (equal). Enfei Xiang: methodology (equal), writing – review and editing (equal). Peizhan Chen: conceptualization (equal), funding acquisition (equal), investigation (equal), methodology (equal), project administration (equal), supervision (equal), writing – review and editing (equal). Xuqian Fang: conceptualization (equal), formal analysis (equal), funding acquisition (equal), investigation (equal), methodology (equal), project administration (equal), supervision (equal), writing – original draft (equal), writing – review and editing (equal).
The authors declare no conflicts of interest.
Data S1. Data S2.
Evaluation of the undergraduate family medicine programme of Faculty of Medicine, University of Kelaniya: quantitative and qualitative student feedback

Family medicine is the discipline geared towards the provision of high-quality health care based on the principles of first-contact, comprehensive, coordinated and personalised care, together with preventive and health-promotive activities. It is the only specialty that provides care to the whole family. Globally, it is now widely recognised that a disease-oriented approach is becoming increasingly dysfunctional and must be replaced by a focus on people and populations with their unique combinations of illnesses rather than on specific diseases. With the increasing emphasis on the importance of primary care, the Ministry of Health, Sri Lanka, issued a directive in 2016 that the training of doctors in primary care should be strengthened.

Medical students of the Faculty of Medicine, University of Kelaniya follow a one-month clinical appointment in family medicine at the University Family Practice Centre in their fourth year of study. Teaching is conducted through a variety of teaching and learning methods. Students engage in traditional patient clerking, observe the day-to-day activities of the clinic, manage the medical records system and learn clinical examination techniques. They are given the opportunity to conduct consultations themselves and receive one-to-one feedback on their consultations. Small group discussions are conducted on common reasons for encounter in a family practice. Students visit a general practice (GP) clinic for three teaching sessions and make one visit to the outpatient department of the Colombo North Teaching Hospital. At the end of the appointment, students participate in a seminar and debate, presenting data on the spectrum of morbidity encountered during their GP visits and formulating a proposed layout for an ideal GP clinic. The end-of-appointment assessment consists of two structured essay questions based on clinical cases and the principles of family medicine. Quantitative and qualitative feedback is collected from students at the end of the appointment.

Many evaluations of undergraduate family medicine clerkships have been conducted in other countries, most of them using a structured self-administered questionnaire. A questionnaire-based student evaluation in an Austrian undergraduate setting found that students viewed the family medicine clerkship as an essential part of their education and were highly satisfied with it. In a questionnaire-based evaluation at the King Saud University College of Medicine, Saudi Arabia, students appreciated learning about the caring and communication aspects of patient care; the study showed that practical procedural skills are desirable features of a preceptorship programme and that students prefer an emphasis on doing rather than observing. Written evaluations of the fourth-year medical student attachment in general practice were obtained from 75 medical students at the University of Dundee to determine the strengths and weaknesses of the teaching programme; interviews were also conducted with students and their tutors, and a focus group was arranged at the conclusion of the attachment. The overall evaluation by the students was positive.
Students liked the opportunity for the hands-on practice of medicine and the collegial reception from their tutors. Major criticisms related to the lack of adequate opportunities for some students to see patients on their own and to learn practical procedures . Publications on undergraduate student feedback on teaching and learning in Sri Lanka are scarce. In an evaluation of the teaching approaches used in the biochemistry course for second-year medical students of the Rajarata University, two questions were administered to students who completed the second MBBS Objective Structured Practical Examination (OSPE) in Biochemistry. The first question was a fixed-response question whilst the second was a free-response question. Lectures were the most popular teaching method; other preferred methods were student–staff interaction and panel discussion, while the seminar was the least preferred . A previous survey was done in the same setting as this study in 2014 using a pretested self-administered structured questionnaire with space at the end for open-ended comments. The questionnaire was administered to six consecutive clinical groups at the end of the one-month clinical appointment. This survey showed that direct observation of student consultations and feedback from teachers was the most popular teaching method among students, while the need to strengthen hands-on learning methods such as procedural skills and clinical examination techniques was emphasised . Since 2016 student feedback has routinely been obtained using the same structured questionnaire that was used in the previous evaluation in this setting to gather data from half the students in each group. A qualitative Round Robin data-gathering method is used to gather data from the other half of the students in each group. The aim of this study was to gain a comprehensive view of student perceptions of the family medicine appointment.
Data were gathered from 185 (98%) students from all eight clinical groups throughout the year 2016. Feedback was taken at the end of the clinical appointment from each group.

Method I – questionnaire

During the feedback activity each group is divided into two according to the register. Half of the students fill in the pre-tested structured feedback questionnaire, which consists of questions with responses on a Likert scale and a space for free comments as well. Additional file shows the structured questionnaire.

Method II – Round Robin activity

The other half of each group provides qualitative feedback using a Round Robin method of brainstorming. During this activity each student is asked to write a free, non-prompted comment regarding the appointment and pass this feedback around the table to the next student, who can either add another comment or indicate agreement (with a tick) or disagreement (with a cross) with the idea expressed by the initial student. This addition of comments and ideas continues until no more new comments are added and the point of saturation is reached. The time taken for this process is approximately 45 min for each group. Variations of this method have been used in many settings to obtain programme evaluation feedback and suggestions for improvement from students, as it provides semi-quantitative data in a way that actively engages students while allowing equal opportunity for all students to share their views [ – ].

Data analysis

Quantitative data analysis was done using SPSS version 22. Qualitative data from the round table method and qualitative data from the free comments in the questionnaire were analysed separately. Thematic analysis was used to identify, analyse and report patterns within the qualitative data . An inductive, data-driven method was used in which three researchers read and re-read the data and coded the data independently. The codes were categorised into themes that were further refined and validated by extensive discussion among the researchers. The team of researchers had varied medical education backgrounds and were at different levels in their careers. The quantitative and qualitative data were scrutinised for convergence, complementarity or dissonance . The numbers of students agreeing or disagreeing with a specific comment in the Round Robin group were taken into account in developing themes. However, views that were not supported by a large number of people but were considered important and reflective of diverse student experience were given thoughtful attention .
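To make the tallying described above concrete, the short Python sketch below shows one way the Round Robin ticks and crosses and the questionnaire's Likert responses could be summarised. It is purely illustrative and was not the study's analysis pipeline (the quantitative analysis was done in SPSS version 22); the comments, counts, responses and field names are hypothetical.

from collections import Counter

# Hypothetical Round Robin records: one entry per comment, with the number of
# ticks (agreements) and crosses (disagreements) added as the sheet circulated.
round_robin = [
    {"comment": "One-to-one feedback on consultations was valuable", "ticks": 11, "crosses": 0},
    {"comment": "Debate topics should differ between groups", "ticks": 7, "crosses": 2},
    {"comment": "More space is needed to examine patients", "ticks": 9, "crosses": 1},
]

# Rank comments by net endorsement to gauge how widely and strongly a view was held.
for item in sorted(round_robin, key=lambda r: r["ticks"] - r["crosses"], reverse=True):
    net = item["ticks"] - item["crosses"]
    print(f"net {net:+d} (ticks {item['ticks']}, crosses {item['crosses']}): {item['comment']}")

# Hypothetical Likert responses (1 = strongly disagree ... 5 = strongly agree)
# to one questionnaire statement, summarised as the percentage agreeing.
likert_responses = [5, 4, 4, 5, 3, 4, 2, 5, 4, 4]
counts = Counter(likert_responses)
agree_pct = 100 * sum(n for score, n in counts.items() if score >= 4) / len(likert_responses)
print(f"Agree or strongly agree: {agree_pct:.0f}%")

Responses of 4 or 5 are pooled here to mirror the way agree and strongly agree answers are reported together in the results.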
Quantitative results from the questionnaire

Students were given the option to agree or disagree with statements evaluating the usefulness of the various aspects of the appointment on a Likert scale. Students provided agree and strongly agree answers to most of the statements. Students stated that they had a clear idea of the learning outcomes for the appointment. They stated that they had gained an adequate understanding of the basic concepts of family medicine and the organisational aspects of a family practice. They were satisfied with the opportunity they got to improve communication skills, history taking, problem solving and presentation skills. However, only 47% of students agreed that they had got the opportunity to develop skills in clinical examination. Student responses to the question stating that they had acquired a basic knowledge of common diseases were ambivalent, with this question not being answered by 54%. The teaching methods they were most appreciative of were learning from patients, followed by the debate and performing consultations under observation and receiving feedback. Table describes the quantitative findings.

Qualitative results of the free comments from the questionnaire

Students were given space in the questionnaire to write free comments on what was good and what was in need of improvement regarding teaching and learning during the appointment. 80% of students wrote at least one comment. It was noted that the themes that arose were similar to those of the qualitative feedback only group, although fewer themes emerged than from the round table method. Two main ideas were stressed: the need for further emphasis on clinically oriented teaching focused on primary care management rather than hospital management, and the inadequate availability of facilities such as space and equipment for patient examination.

Qualitative results from the round table activity

Teaching methods

Students rated being able to conduct consultations independently and then receiving one-to-one feedback highly. “Giving opportunity to consult a patient in front of a doctor is good thing for us to improve our consultation skills.” The majority disliked didactic lecture-style teaching and preferred case-based interactive discussions. They mentioned that they would have appreciated more time for discussion of the patients they had seen in the clinic. The end-of-appointment debate was also one of the learning opportunities valued by most students. Students recognised that it gave an opportunity to engage “students who did not usually participate.” However, many students requested that a new topic for debate should be given to each group without repeating the same topic for each group. Interestingly, although many students complained about the distance they had to travel to visit the GP practices, “tendency to get RTA (road traffic accident)”, a larger number appreciated the experience, claiming that it gave them the chance to observe an “authentic GP setting” and see the “GP approach to patients.” Students described the GP trainers as “friendly” and “enthusiastic in their teaching.” Students also enjoyed the home visits and requested more exposure to home visits. There was a general view that there should be more opportunity for practical hands-on work at the clinic. Students said they would have liked more exposure to procedures such as wound care, nebulisation, etc. They also mentioned that they had not had enough opportunities to practise examination skills or observe teacher demonstration of examination techniques.

Impact on knowledge, skills, attitudes and future practice

Students believed that common topics likely to be encountered during general practice were covered during the appointment and their prior learning was refreshed. It was stated that the topics were “appropriate”, “pitched at the correct level” and “covered all common diseases”. Students requested that more emphasis be given to teaching the management aspect. Students suggested that the clinic should have a pharmacy stocked with the common medications used in a family practice to provide an opportunity for learning about medicines. Having the opportunity to register patients, retrieve medical records and get first-hand experience as a practice manager was highly appreciated. “Your method helped us to do it ourselves and we will remember the idea for a life time.” “Doing a lecture on medical records would have been boring.” The appointment seemed to have had a positive impact on student communication skills. Students said that they were able to learn how to build a good rapport with the patient and how to improve their communication skills. It was a surprising finding that students revealed that they were hesitant to take time from the patients’ visit for learning purposes. “Cannot trouble patients in this setting as we do in the hospital.” They were sensitive to the fact that patients were kept longer when students were in the clinic. “Patients are kept waiting for a long time (when teaching is being carried out).” The appointment had kindled an interest in the field of family medicine in many. It was stated that the experience had given them a clear view of how to establish their own practice in future. One student said “your effort to make family medicine a stream that many students will pursue was not in vain. I feel that it’s a good stream for a doctor to have less stress and much satisfaction.” “Seeing how the patients benefit from the consultations, counselling made the field seem much more interesting.”

Staff and learning environment

The fact that the learning environment was student friendly and free of stress was highly appreciated. “Staff in the family medicine department has this different vibe”. “Feels like a family-well, may be that suits its’ name”. “Very enthusiastic and not stressful which facilitates learning.” Students valued the authenticity of the setting both at the university clinic and the GP clinics. It was said that it helped them in “getting an idea about how general practice is different from the clinical experience of hospital only to which we’ve been exposed so far.” During the appointment students learn various skills from the practice nurse, lab technician and administration staff regarding patient care, lab investigations and day-to-day clinic management. Students appreciated the support given to them by all categories of staff. “All staff; professors, doctors, demonstrators, clerk and lab technician put great effort and conducted the appointment in a professional way”. Students complained that there was inadequate space for them to take histories and examine patients in the clinic.
A majority of studies evaluating undergraduate family medicine clerkships use a quantitative methodology in which students fill in a questionnaire based on a Likert scale [ – ]. When formulated in a systematic manner this method of evaluation has been found to be valid and useful . However, the value of qualitative feedback for programme evaluation is being increasingly reinforced both in health education and wider fields of teaching [ , , ]. It was thought worthwhile to reflect on the differences between the two types of data collected. In this evaluation, student free comments from the round table discussion appeared to be well thought out, and specific details and examples had been given explaining what worked and what did not. The qualitative feedback provided a richer and more in-depth overview of student ideas on the appointment, which is more useful with regard to implementation of future changes in teaching and learning. The number of students who had agreed or disagreed with a specific statement in the Round Robin group helped give an idea of how widely and strongly a specific opinion was held within the group. The qualitative findings also facilitated the revelation of certain unexpected and intangible findings related to attitudes and behaviour that could not have been gauged from a structured questionnaire with predetermined questions. Data from the questionnaire were mostly complementary and converged with the qualitative findings. However, some limitations of using a structured questionnaire in programme evaluation are highlighted. The majority of statements in the questionnaire received agree and strongly agree answers, which calls the quality of the data into question. Furthermore, there was a 54% non-response to the question on whether students acquired a basic knowledge of common illnesses seen in family practice and their management. This illustrates the ambiguity and difficulty in interpreting numbers with regard to direction for improvement when there is significant non-response to an item and the reason for non-response is not known. In this study students emphasised that teaching should be more clinically oriented with more opportunity for practical hands-on learning, a reflection of their adult learner identity. In a reflection of a desire for learning to be of relevance to assessment and practice, they mentioned that they would have valued more exposure to conducting procedures, patient examination and clinically oriented teaching focused on management. These findings closely follow those of similar studies done in undergraduate family medicine clinical teaching settings, which emphasise that students prefer active and hands-on learning styles [ , , ]. The fact that students felt that they could not “trouble” the patients at the family practice by taking histories, examining them and therefore taking extra time from the patients’ visit was an interesting finding and a subtle indicator that students had perhaps come to understand the difference in the clinical environment at a family practice, where patients have more autonomy in comparison to ward patients. It could be hypothesised that exposure to a first-contact ambulatory primary care environment had an impact on student attitudes in line with patient-centred care. Some previous studies in general practice teaching settings have found that student participation during consultations increases consultation time and raises issues of confidentiality.
Despite this finding, studies also show that patients are mostly happy for students to be present during consultations with their GP. While students in this study may have felt they were wasting the patient’s time and that their presence did not add anything to the consultation, previous studies show that patients felt that they benefitted from the presence of a student, as they were able to learn more details about the illness from students, and students helped in revealing details to the doctor [ – ]. In our setting students take the patient’s history before coming into the consultation room, and then present the history to the teaching doctor in front of the patient. This method has been shown to be more time-efficient than the student presenting histories to the teacher separately, but still results in an increase in the consultation time . Students also perform at least one complete consultation from start to finish under the observation of the supervisor, with supervisor intervention when and where appropriate. This method increases the time for teaching even more. Despite the increasing recognition of the need to strengthen primary care, family medicine continues to commonly be a default career option . Recent Ministry of Health plans for primary health care reforms in Sri Lanka acknowledge that the principles of family medicine need to be integrated into health training so that the attitudes and practices of primary level personnel are adapted to this approach . Studies indicate that exposure to an undergraduate family medicine clinical appointment has a positive impact on student attitudes towards family medicine . In the current evaluation it was evident that student attitudes towards family practice had improved by the end of the appointment. In order to motivate more students to actively pursue a career in family medicine, it is imperative that undergraduate teaching and training in family medicine be carefully planned out to highlight the positive aspects of a career in family medicine. The programme should be perceived as a valuable part of undergraduate medical education. Students highlighted the importance of a conducive learning environment in the facilitation of learning. While students were unhappy about the lack of adequate space and equipment necessary for patient examination, the attitudes of the teaching and support staff involved in the family medicine programme seem to have had a positive impact on student learning. The structure of the family medicine programme facilitates positive interaction with a team of individuals from different professions and staff categories. Students highlighted the support given to them by the whole team as an important motivator of learning. The findings of this evaluation led to changes being made to the programme. The small group discussions were planned to be more clinically oriented, and the number of sessions was reduced to allow more time for interaction with and learning from patients, facilitated by teachers within the consultation room. Students were allocated more space to talk to patients and examine them, and new topics were introduced for the debate.

Limitations

Half of the students in each clinical group were allocated to give feedback using the questionnaire and the other half participated in the Round Robin qualitative feedback activity.
In order to allow for a more robust comparison of qualitative and quantitative feedback, it may have been better to request the whole student group to fill in the questionnaire, followed by recruitment of a sample of the same group to participate in the qualitative feedback activity.
Regular evaluation of teaching programmes helps maintain the accountability of faculty and paves the way for more student-centred teaching through the incorporation of students’ views in devising teaching methods. This evaluation found that qualitative feedback provided more descriptive material to reflect on, and therefore improve, teaching on the programme. It is recommended that more use be made of qualitative methodologies in programme evaluations.
Additional file 1. Questionnaire for evaluation of the teaching and learning on the undergraduate family medicine appointment at the Faculty of Medicine, University of Kelaniya.
|
Considerations for Providing Pediatric Gender-Affirmative Care During the COVID-19 Pandemic | bcf923ef-185a-427d-958d-1c0060409c0b | 7489217 | Pediatrics[mh] | Gender diverse youth face unique adversities during the COVID-19 pandemic. It has been well established that gender diverse youth face disproportionate mental health disparities, such as higher rates of depression and suicidality, yet also demonstrate remarkable resiliency . Many professionals who work with this community have a growing concern that these health disparities will increase as the COVID-19 pandemic continues. Some gender diverse youth face the additional challenge of being at home with family members who are unsupportive and nonaffirming of their gender identities and may experience daily microaggressions and overt aggressions. Providers can guide family members who are struggling with acceptance of their gender diverse youth to become more supportive and affirming, such as by encouraging families to use a youth's chosen name and pronouns as consistently as possible . Parental support has been demonstrated to reduce rates of negative health outcomes and protect against suicidality . Additionally, mental health providers have the opportunity to create new services, such as telemedicine group psychotherapy experiences, to provide supportive experiences to gender diverse youth.
Chest binding

The use of chest binders in transmasculine and nonbinary youth has been documented to improve gender dysphoria and may also continue to help affirm gender diverse youth who may have delayed top surgeries . However, providers should consider a harm reduction approach to minimize or discontinue chest binder use, especially during suspected or confirmed COVID-19 infection, which could result in worsened respiratory symptoms.

Pubertal blockade

The ability to perform clinical and/or biochemical assessments of pubertal onset may be affected by the ability to safely perform in-person visits and/or laboratory testing and, thus, is individualized based on local disease prevalence, federal and state mandates, and local hospital policies. An estimated growth velocity, especially for assigned females, may be supportive of pubertal onset, as may an early morning luteinizing hormone level and sex steroid levels . A baseline bone age X-ray and/or bone density scan (e.g., DXA scan), as recommended by current clinical practice guidelines, may be deferred and could still be obtained likely up to six months after treatment initiation and still be considered a baseline . While current clinical practice guidelines recommend long-acting GnRH agonists, in clinical settings where surgical procedures for Histrelin implants have been suspended, injectable forms of GnRH agonist, such as Leuprolide Acetate (monthly or every 3, 4, or 6 months) or Triptorelin (every 6 months), can be considered . In some cases, alternatively, use of a progestin (e.g., medroxyprogesterone) to achieve pubertal suppression could be considered, although progestins are less effective than GnRH agonists .

Gender-affirming hormones

Similar to pubertal suppression, implementation of gender-affirming hormones will be individualized to local policies. For patients desiring injectable hormones, consideration should be made for a virtual visit to supervise the patient through their first injection and/or use of publicly available injection administration videos. In some cases, the use of topical testosterone could be considered for transmasculine patients; however, caution should be exercised in some clinical scenarios due to the potential to unintentionally masculinize cisgender females who may have close contact . Adjunctive therapies, as recommended by current clinical practice guidelines, such as progestins for menstrual suppression or spironolactone for facial hair growth, can be offered if gender-affirming hormones are not being actively prescribed by programs .

Fertility preservation

While international reproductive medicine organizations originally recommended limiting fertility services, such as to oncologic cases, due to the pandemic, their recommendations continue to evolve . The majority of fertility preservation counseling, as these services have low rates of utilization by transgender youth, is easily rendered in virtual visits, including discussion of the processes involved in undergoing fertility preservation (e.g., ovulation induction for oocyte cryopreservation or sperm banking) . Additionally, gender diverse youth and their guardians should be reminded that fertility preservation does not have to occur before gender-affirming hormone therapy; studies have documented both successful pregnancy and oocyte retrieval after discontinuation of testosterone, and sperm retrieval after discontinuation of estrogen . Even during the pandemic, fertility preservation is an important part of counseling before medical transition.
There are various factors that may influence when gender-affirmative surgery is able to be performed as surgical centers return to higher capacity. First, surgeries require resources (e.g., personal protective equipment) that are needed for the care of COVID-19 patients. Furthermore, surgeries have different requirements for postoperative inpatient care, which may affect the reinitiation of certain procedures (e.g., outpatient surgeries may be prioritized, thus allowing for mastectomies and hysterectomies, but not phalloplasty). Finally, a person's COVID-19 status, such as active or past infection, is a new surgical consideration. There are also considerations unique to gender-affirming surgeries. Surgical delays can be additional stressors for gender diverse youth, who have often waited a number of years for these procedures. Some youth and young adults have overcome geographic, legal, financial, or insurance barriers to be able to access surgery, as well as being placed on long surgical waitlists. Those who are experiencing greater stress and/or dysphoria, or who may be at higher risk of a resurgence of those barriers, might benefit from greater surgical prioritization. Regardless of how surgeries are phased in, there are many ways in which clinicians can help support patients experiencing surgical delays. Surgeons can support their patients by maintaining open and honest communication regarding the status of surgeries and validating the experiences of their patients, such as by acknowledging the distress that surgical delay may cause. It is important to avoid the use of terms such as “elective,” “cosmetic,” or “non-essential.” Both surgeons and nonsurgical clinicians can remind patients that gender-affirming surgeries are medically necessary and that they will advocate for their prioritization.
It is of the utmost importance that all providers, across disciplines, stress that gender-affirmative care (GAC) is “essential,” even during a pandemic. Unfortunately, many gender diverse youth have experienced delays in medically necessary treatment (e.g., pubertal suppression, hormone therapy, surgeries, etc.) and have potentially received the unintended, yet damaging, message that GAC is not a priority; therefore, it is important that multidisciplinary professionals working with gender diverse youth and their families counter this narrative. Facilitating open and appropriate communication about the efforts being taken to provide GAC is critical so that gender diverse patients and their families know that their health care is valued and medically necessary.
|
Laparoscopic management of infantile hydrocele in pediatric age group | 4e90b702-d25e-4d64-9cc2-62701adcc3ae | 8913565 | Pediatrics[mh] | Infantile hydrocele is an abnormal collection of fluid along the course of the processus vaginalis due to incomplete obliteration. The occurrence of infantile hydrocele is related to the descent of the testis: as it passes through the internal ring, it pulls along a diverticulum of peritoneum on its anteromedial surface referred to as “the processus vaginalis” . A persistent patent processus vaginalis (PPV) is a common cause of hydrocele in children and explains approximately 60% of the cases in infants. Closure of the PPV may therefore be the most effective means of preventing recurrence . Traditional open repair entails performing an inguinal incision, dissecting the inguinal canal, high ligation of the PPV, and draining the fluid or creating a window in the tunica vaginalis . However, laparoscopic closure of the internal orifice of the PPV has become an option for the treatment of hydroceles in children . The timing of surgical intervention was based on one of the following conditions, according to the survey of the Section on Surgery of the American Academy of Pediatrics: appearance of hydrocele after one year of age, initial onset in infancy but persistence beyond one year of age, and presence of a reducible or communicating hydrocele . The aim of this study was to evaluate the applicability, efficacy and safety of laparoscopic management of hydrocele in the pediatric age group, aiming for uniform national guidelines for children in whom surgery is indicated, in addition to laparoscopic evaluation of the internal inguinal ring and PPV in different types of pediatric hydroceles; this was the primary outcome. The secondary outcome was evaluation of the incidence of contralateral PPV.
This prospective study was conducted on 93 male children with 106 hydroceles in the period from July 2019 to June 2021 at the pediatric surgery unit, surgical department, Tanta University Hospital and its affiliated hospitals, Tanta, Egypt. After approval from the institute’s Research Ethics Committee, informed consent was obtained from the parents or legal guardians of each patient. The privacy of participants and the confidentiality of the data were protected, with a patient ID assigned to each participant. We included in this study patients who presented with hydrocele after one year of age, patients with initial onset of hydrocele in infancy but persistence beyond 1 year of age, and patients with a reducible or communicating hydrocele. We excluded cases of Type I hydroceles as described by Chang et al. (Fig. ), as they had a closed internal inguinal ring (IIR) (the cyst does not communicate with the peritoneal cavity) .

Preoperatively, all patients underwent thorough clinical examination and evaluation of the inguinoscrotal region and inguinoscrotal ultrasonography, and routine pre-operative laboratory investigations were done. The parents of the patients were informed about the advantages and disadvantages of laparoscopic surgery and signed a consent for the surgery.

Under general anesthesia, with the patient in the supine position, the operator and camera assistant stood at the head of the patient, with the monitor at the end of the operating table. A longitudinal trans-umbilical incision was performed and a 5-mm trocar for the scope was inserted and secured to the abdominal wall. Pneumoperitoneum was created, followed by exploration of the abdominal cavity and the IIR on both sides; afterwards, another two working trocars (3-mm or 5-mm) were inserted under vision on the right and left midclavicular lines at the level of the umbilicus. The shape of the IIR on the same side was evaluated laparoscopically and classified according to the type of hydrocele described by Chang et al. (Fig. ) : Type I, closed IIR with no communication between the hydrocele and peritoneal cavity (excluded from the studied cases due to the closed IIR); Type II, open IIR with communication between the hydrocele and peritoneal cavity; Type III, IIR wide open but the hydrocele does not connect to the peritoneal cavity. For Type II communicating hydrocele, the IIR was dissected like that of an inguinal hernia, followed by complete excision of the hydrocele, or going as far as possible beyond the narrow part to avoid recurrence in the remaining part of the sac; the conjoint tendon was sutured to the ilio-pubic tract and the peritoneum was closed (Video 1). The laparoscopic management for Type III hydrocele, either A or B, involved dissection of the IIR and delivery of the encysted hydrocele with either single or double cysts, followed by wide elliptical excision of the wall and then closure of the muscle arch and peritoneum. The contralateral IIR was evaluated to determine whether it was closed or open; if it was open, dissection of the IIR was followed by excision of the sac as far as possible and closure of the peritoneum, with or without muscular arch repair.

Unless there were any post-operative complications, the patients were discharged home on the same day, with follow-up every week during the first month and then at 3, 6, and 12 months. All our cases were followed up clinically at regular visits in the outpatient clinic; post-operative ultrasound was not routine for follow-up and was indicated only if post-operative recurrence of hydrocele was detected clinically.
This study included 93 male patients with 106 hydroceles. Bilateral hydroceles were detected by clinical examination in 13 patients (14%), right-side hydroceles in 49 patients (52.7%), and left-side hydroceles in 31 patients (33.3%). After exclusion of 9 cases (8.5%) (Type I) in which the IIR was closed, all of which were unilateral (Fig. ), of the remaining 71 patients with unilateral hydroceles, a patent contralateral internal ring was detected by ultrasound examination in 9 patients (12.7%). During laparoscopy, a patent contralateral internal ring was detected in 54 of the remaining 62 patients (87.1%). The age of the participants ranged from 1 to 72 months, with a mean of 24.08 ± 14.73 months. The included Type II and III hydroceles were all completed laparoscopically, with no conversion to open surgery during the period of the study. As regards the laparoscopic shape of the internal inguinal ring (IIR) on the same side, we found that the IIR was patent (Type II and III) in 97 hydroceles (91.5%) (Figs. and ) (Table ) (Video 2). According to these findings and this classification, the procedures were performed as follows. Type II (communicating hydrocele) (78 hydroceles) was managed through excision of the sac as far as possible and evacuation of the hydrocele, followed by closure of the IIR. Types III A and B were managed similarly, with the addition of delivery of the encysted part (one or two cysts) and evacuation, followed by excision of a wide ellipse of the wall. The contralateral IIR was found to be open in 63 (88.7%) of the remaining 71 patients; dissection of the sac and closure of the IIR was done. The operative time ranged from 20 to 45 min (for one side) with a mean of 30.99 ± 7.23 min, with no intra-operative complications. The vas deferens and testicular vessels were secured and there were no injuries or bleeding. The open conversion rate was nil, and all procedures were completed totally by laparoscopy. All patients were followed up in the outpatient clinic. The mean follow-up period was 13.8 ± 4.1 months (range 6 to 23 months), and there was no evidence of recurrent hydrocele or testicular atrophy; post-operative ultrasound was not routine for follow-up. Post-operative ultrasonography was performed in only one patient, who had post-operative scrotal oedema; it revealed no recurrence, and the oedema was managed conservatively.
The frequency of pediatric congenital hydroceles is reported to be about 5.7%, and many classifications have been used to describe the pathology. Martin et al. described two types of hydroceles: the funicular type, in which the peritoneal diverticulum communicates with the peritoneal cavity at the internal inguinal ring, and the encysted type, in which the cyst does not communicate with the peritoneal cavity or processus vaginalis . Our results matched the classification of Chang et al., which categorized hydroceles that belong to neither the funicular nor the encysted type as a mixed type, in which the cyst does not communicate with the peritoneal cavity but has a proximally patent processus vaginalis . Based on our results, we can modify the previous hydrocele classification of Chang et al., described in (Fig. ), by adding subdivisions to Type II: Type II A, the IIR has a wide opening; Type II B, the IIR is covered by a peritoneal seal; Type II C, there is a narrow communication with the hydrocele (pinhole) (Fig. ) (Video 2). The ideal time for congenital hydrocele repair is controversial, because most PPVs will close spontaneously within 1–2 years. Therefore, most surgeons avoid hydrocele operation within the first 1–2 years of life unless hernia cannot be excluded . In our study we included patients with appearance of hydrocele after 1 year of age or persistence beyond 1 year of age, and patients with a reducible or communicating hydrocele. We found that five operated patients with hydrocele were under 1 year of age, and all had communicating hydroceles (Type II A) with a wide patent processus vaginalis. Another study stated that operation in the first year of life is only required if the hydrocele is huge in size or associated with inguinal hernia . In contrast, others reported that in the case of hydroceles with PPV, elective operation is recommended regardless of age, since there is a high risk of hernia developing due to the PPV . Choi et al., in their comparative study, restricted the age to after two years unless comorbid ipsilateral inguinal hernia or cryptorchidism mandated surgery before that age . Janetschek et al. in 1994 were the first to perform laparoscopic hydrocelectomy . Takehara et al. began successfully using laparoscopic percutaneous extraperitoneal closure (LPEC) to treat children with inguinal hernias . Since then, modified LPEC techniques have been reported, which differ from each other in the use of LPEC surgical devices, including self-made hernia needles, Endoclose needles, GraNee needles, Reverdin needles, subcutaneous injection needles, common suture needles and epidural needles as suturing instruments . The recurrence rate was higher with the use of the percutaneous techniques described by Zhang et al.; the cause of recurrence was a reopened or mis-ligated PPV in the open group, or ligature loosening that resulted in incomplete closure of the PPV in the laparoscopic group. The study by Shehata MA also did not recommend LPEC, due to its high rate of complications and recurrence . In our study all procedures were performed totally laparoscopically using three ports. Many surgeons have stated that laparoscopic surgery is only indicated for communicating hydroceles. However, Yang et al., in a 10-year experience and follow-up of laparoscopic repair of hydroceles of all types, reported that 283/284 patients (99.6%) in their case series were discovered to have open internal rings and PPV instead of closed internal rings, whatever the type of hydrocele . Moreover, Zhang et al.
reported that open PPV was found at the internal ring orifice in 98.53% of patients during laparoscopic surgery, and ideal efficacy was achieved following closure of the internal ring and percutaneous aspiration through the scrotum. Furthermore, 1.47% were confirmed to have a negative PPV or internal ring orifice, and these patients were switched to the trans-scrotal procedure, which resulted in minimized surgical incisions compared with the conventional inguinal approach . The laparoscopic approach has the advantages of less injury to the spermatic cord and spermatic duct, more cosmetic incisions and the possibility of finding and treating contralateral PPV and other abnormalities . In comparison with the results of Choi et al., who described a patent IIR in all cases , we found that the IIR was closed in 9 (8.5%) hydroceles and patent in 97 (91.5%) hydroceles. However, this study matched the results of Saka et al., who reported that 97.7% of hydroceles were patent around the internal inguinal ring: 59.1% with a narrow patent processus vaginalis covered with a peritoneal veil, and 38.6% with a widely open patent processus vaginalis . Our results (9 cases of Type I with a closed IIR) matched those of Zhang et al., who had fourteen cases with a closed IIR that were converted to an open scrotal approach for repair . The recurrence rate after laparoscopic hydrocelectomy has been reported to be 0–1.4% . In our study, there was no recurrence of hydrocele after total laparoscopic hydrocelectomy for all types. However, in the study by Choi et al., there was one (0.7%) recurrence in the scrotal incision hydrocelectomy (SIH) group, and the recurrence rate of the whole study was 0.2% in a comparative study of SIH and total laparoscopic hydrocelectomy (TLH) . On evaluation of the contralateral IIR, our total laparoscopic three-port technique allowed good visualization and detected a contralateral PPV in 88.7% of cases, in comparison with the study by Zhang et al. of different LPEC approaches, which reported that the two-port LPEC approach is better for diagnosing contralateral PPV and reducing metachronous hernia or hydrocele than the single-port LPEC procedure . There is controversy about operating on the contralateral side of a hydrocele, especially when it is not clinically relevant, but in this study we operated on the contralateral side as well, to resolve the hydrocele and to save the patient another operation in the future, especially when there was a patent processus vaginalis or communicating hydrocele. In our study all contralateral PPVs were managed laparoscopically with complete dissection of the ring, excision of the sac and IIR closure. However, in the literature, the treatment of contralateral PPV remains controversial, and the probability of hernia or hydrocele if left untreated is approximately 5.6–16% . Zhang et al. recommended that all types of contralateral PPV should be treated, and advised ligation if the opening is larger than 2 mm, whereas the peritoneal orifice is torn with forceps when its diameter is less than 2 mm .
Laparoscopic hydrocelectomy is safe, applicable, and feasible for the management of the different types of hydrocele in the pediatric age group. The IIR is patent in nearly all cases, with or without communication with the hydrocele. Closure of the IIR during laparoscopic hydrocelectomy is essential to prevent recurrence. The contralateral IIR can be managed laparoscopically in the same session.
This study had some limitations: a small number of patients for a common surgical entity, and a relatively short-term prospective design at a single tertiary center. Total laparoscopic hydrocelectomy was performed for all types without comparison to laparoscopic-assisted or conventional open procedures, and further comparative studies are recommended.
Below is the link to the electronic supplementary material. Video 1: Type II hydrocelectomy (MP4 103540 kb) Video 2: Types of hydroceles (MP4 118328 kb)
Influence of Diversity Nursing on Patients' Rehabilitation in Cardiology Treatment

1.1. Background and Significance

In recent years, with the development of society and rising living standards, the so-called "diseases of affluence" have become increasingly evident in the population. Although quality of life has improved, overeating is common and leads to obesity, which in turn can cause hypertension, coronary heart disease, diabetes, and other cardiovascular diseases; these diseases are driven mainly by body fat, blood sugar, and blood pressure . Material abundance has made such diseases more rampant, especially among middle-aged and elderly people over 50, whose declining bodily functions weaken their resistance to disease, so the mortality rate is very high. Diversified nursing is a care model that has recently gained popularity and shows relatively good results in clinical diagnosis and treatment: through preventive care beforehand, routine care during the operation, and psychological care afterwards, comprehensive treatment of the patient is achieved. It is a humane nursing method. Discussing the therapeutic effect of diversified nursing on cardiology patients accords with the current era and the state of technological development; at the same time, this new type of care will benefit the clinical treatment of cardiology and is of great significance for improving the quality of life of cardiology patients.

1.2. Related Work

A large body of information and data shows that cardiovascular disease is already a major threat to human health, and research on its treatment has a long history. Rodríguez Padial and Barón-Esquivias pointed out that cardiovascular disease is the leading cause of death in the world; coronary artery disease, atrial fibrillation, and hypertensive heart disease are among the most important cardiovascular diseases, and hypertension is a major risk factor for cardiovascular death, so the control of hypertension has become the primary task in preventing major complications . In addition, the Chinese Expert Consensus Group on the Diagnosis and Treatment of Cardiovascular Diseases and Insomnia has stated that cardiovascular disease (CVD) is accompanied by a high burden of insomnia, which has drawn much attention to the relationship between insomnia and CVD, including coronary heart disease, hypertension, heart failure, and psychocardiological disease. Such insomnia symptoms deserve special attention in nursing work: massage can relieve fatigue and promote blood circulation, and traditional Chinese medicine techniques such as acupuncture and moxibustion can effectively relieve insomnia and improve health. Many studies have shown that patients with cardiovascular disease fall into insomnia more easily than healthy people, and severe insomnia troubles CVD patients and seriously affects the course and prognosis of their treatment. However, practical guidance for the diagnosis and treatment of this comorbidity is lacking.
Therefore, a special consensus statement is urgently needed to guide the diagnosis and treatment of cardiovascular disease combined with insomnia . To alleviate the great pain and injury caused by cardiovascular disease (CVD), Klainin-Yobas et al. proposed psychological interventions to reduce the suffering it causes. They conducted a comprehensive literature search to identify published and unpublished randomized controlled trials (RCTs) in English between 2000 and 2015; two reviewers independently screened studies, assessed the risk of bias, and extracted data. The extracted data were analyzed with a comprehensive meta-analysis software package, using Hedges' g effect sizes to quantify the effect of psychosocial intervention. They found mean effect sizes of 0.34, 1.04, 0.42, and 0.67 for stress, anxiety, depression, and depression/anxiety syndromes, respectively; at follow-up, these figures were 0.09, 0.65, 0.22, and 0.09 . Previous research thus shows that cardiovascular disease poses a fatal threat to humans and that no fully effective plan for its diagnosis and treatment yet exists. The damage it causes is both physical and psychological, imposes a great burden on patients, and seriously affects quality of life. This article proposes the use of diversified nursing to treat patients with cardiovascular diseases, covering prevention beforehand, diagnosis and treatment during the operation, and postoperative prognosis, in the hope that this comprehensive, humanized nursing method will raise patients' recovery rate and improve their quality of life.

1.3. Innovations in This Article

The innovations of this article are mainly reflected in the following aspects. (1) Cardiovascular disease is currently a major disease endangering human health, with very high mortality and disability rates in China, so the treatment discussed here accords with the current era and has practical social significance. Since there is as yet no reliable way to prevent cardiovascular disease, seeking new solutions that mitigate its damage from a multifaceted-care perspective has fresh implications for the psychological and physical health of cardiac patients. (2) Diversified nursing is a new, humanized nursing method with good effects on the diagnosis, treatment, and prognosis of patients with cardiovascular diseases; this article explores its influence on the treatment of cardiology patients and further probes its efficacy in clinical diagnosis and treatment. (3) 300 patients in our hospital's cardiology department were selected by random sampling as experimental subjects, and a controlled experiment on diversified nursing was carried out (a sketch of such an allocation follows below). The data all come from the clinical records of the hospital's Department of Cardiology, and the experimental results are scientific.
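The controlled design in point (3) amounts to a simple random split of the enrolled patients into a routine-nursing control group and a diversified-nursing experimental group. The following Python sketch shows one way such an allocation could be made; the group labels, the equal group sizes, and the fixed seed are our illustrative assumptions, not details reported in the paper.

```python
import random

def allocate_groups(patient_ids, seed=42):
    """Randomly split patient IDs into two equally sized groups.

    A minimal sketch of random allocation for a controlled nursing
    experiment; the label names are hypothetical.
    """
    ids = list(patient_ids)
    rng = random.Random(seed)   # fixed seed so the split is reproducible
    rng.shuffle(ids)
    half = len(ids) // 2
    return {"control": ids[:half], "experimental": ids[half:]}

# 300 cardiology patients, IDs 1..300, split 150/150.
groups = allocate_groups(range(1, 301))
assert len(groups["control"]) == len(groups["experimental"]) == 150
```

In practice a stratified or blocked randomization (for example by age group or diagnosis) would usually be preferred, but the simple shuffle above captures the idea.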
2.1. Cardiology

The Department of Cardiology is a clinical department set up by hospitals at all levels for the treatment of cardiovascular diseases. It mainly treats diseases such as hypertension, coronary heart disease, angina pectoris, acute myocardial infarction, sudden cardiac death, and arrhythmia . Heart disease, caused by abnormal function or a structural defect of the heart, is a collective term for all heart diseases, congenital and acquired, and the different types manifest in different ways; it is one of the most common diseases in life. Provinces and cities across the country have expanded their cardiology departments and strengthened the training of cardiology medical staff, which has contributed greatly to the treatment of cardiovascular disease in China . This article mainly discusses the clinical diagnosis and treatment of cardiology patients; before that, we must first understand the pathology and some clinical characteristics of cardiological diseases.

2.1.1. Cardiovascular Disease

"Cardiovascular" is the collective term for the heart and blood vessels, and the cardiovascular system is the blood circulation system comprising the two. All diseases related to the human heart and blood vessels can be collectively called cardiovascular disease, or heart disease for short. The heart pumps blood into the arteries of the body, where nutrients, oxygen, and metabolic waste are exchanged via the capillaries; the blood then enters the veins and is carried back to the heart, completing the circulation. When the heart and the vessels connected to it become diseased (insufficient blood supply to the heart, arrhythmia, cardiac hypertrophy, vascular lesions, and so on), the function of the whole cardiovascular system can be affected. Cardiovascular diseases usually occur in middle-aged and elderly people over the age of 50; in recent years, however, they have also appeared in younger people and have become a major public health problem. Many young people used to think that hypertension is a disease only of the elderly and has nothing to do with them, yet reported data show that the incidence of hypertension has reached 8% among Chinese primary and middle school students aged 6–18. This suggests that cardiovascular disease increasingly affects the young and that the perception that young people do not develop it is mistaken; its symptoms need to be taken seriously in every age group. The onset of cardiovascular disease is insidious: early symptoms are not obvious, the course is long, and it gradually erodes the cells and organs of the body without the person being aware of it. Moreover, cardiovascular diseases are interrelated and can cause and aggravate one another, so a patient with one such disease is likely to develop others at the same time; for example, patients with diabetes often also have hypertension and coronary heart disease .
2.1.2. Pathogenesis of Cardiovascular Disease

The causes of cardiovascular disease can be divided into congenital and acquired factors. Congenital factors mainly include family inheritance and the patient's own genetic mutations; acquired factors mainly arise from a poor lifestyle, such as staying up late, smoking, alcohol abuse, and an unhealthy diet. In addition, the body may be affected by the adverse reactions of certain drugs, which can also lead to such diseases . In short, cardiovascular disease usually results from a combination of many factors.

2.1.3. Common Types of Cardiovascular Diseases and Their Clinical Manifestations

Cardiovascular diseases are of many kinds, mainly including the following. (1) Hypertension. Hypertension is one of the most common cardiovascular diseases; onset is mostly slow and without specific clinical manifestations. Typical hypertensive patients often present with dizziness, headache, and flushing. Patients with severe hypertension suffer damage to the heart, brain, kidneys, and retina, manifesting as chest tightness, shortness of breath, uncoordinated limbs or even hemiplegia, impaired speech, swollen feet, and decreased vision. (2) Coronary Heart Disease. This mainly includes angina pectoris and myocardial infarction, and the damaged organ is the heart. When patients with angina pectoris are emotionally excited or overworked, they experience severe retrosternal pain, often radiating to the shoulder. The symptoms of myocardial infarction are similar to those of angina pectoris but more severe: the pain lasts longer and is more intense, attacks often occur even at rest, and they are accompanied by a pale complexion, sweating, and cold limbs. (3) Heart Failure. Heart failure is likewise a disease of impaired cardiac function. Its main manifestations are a marked decline in working capacity and endurance, difficulty breathing, and persistent sputum in the throat. At night, patients may be able to fall asleep only lying on their side or may even need to sit up all night, and the legs may develop oedema, accompanied by abdominal bloating and loss of appetite. (4) Arrhythmia. Arrhythmias include slow (brady-) arrhythmias and tachyarrhythmias. Bradyarrhythmias mainly manifest as dizziness, weakness of the limbs, frequent fainting, and loss of consciousness; tachyarrhythmias mainly manifest as palpitations and a frequent feeling of shortness of breath.

2.2. Diversity Care

2.2.1. Routine Nursing Content

General nursing work mainly provides hygienic care and assistance to patients who have lost the ability to care for themselves, covering daily activities such as washing the hair, bathing, changing clothes, wiping the body, and cutting nails; different tasks call for different care . The specific methods are as follows. Shampoo and Shower: for patients who cannot care for themselves, nursing staff should help them wash their hair and bathe, and dry them promptly afterwards so that a chill does not aggravate their condition. While washing the hair, the nurse should watch the patient's complexion, breathing, and pulse, and stop immediately if anything is abnormal.
When bathing such patients, bathing in the hospital bed should generally be used, the water temperature should be kept at about 40–45°C, and the movements must be as gentle as possible. Oral Care: nursing staff should help patients brush their teeth after eating; because the mouth is warm and moist and retains food residue after meals, bacteria breed easily, so frequent brushing is needed. Dentures should be removed and rinsed with cold water. Nursing Care for Patients in a Severe Coma: nursing staff must strengthen oral care and turn the patient and wipe the back frequently to prevent pressure sores; for incontinent patients, bed linen and clothing must be changed promptly. Care for Children: child patients have low immunity, weak resistance, susceptibility to disease, and little awareness of hygiene, so nursing staff should pay constant attention to their cleanliness, such as washing hands before and after meals, brushing teeth, bathing, and changing clothes frequently. Nursing Care for Elderly Patients: because the various functions of the elderly body decline, activity is limited and reactions are slow, so nursing staff should strengthen skin care for the elderly to prevent pressure sores.

2.2.2. Diversity Nursing

With the progress of society and the further development of nursing work, general nursing can no longer meet the needs of different patients, and diversified nursing has emerged . Diversity nursing, as the name implies, draws on a variety of nursing methods and skills and provides specific assistance according to the different needs of different patients. It can therefore also be called personalized or humanized nursing, giving the patient the fullest possible range of care to promote and assist recovery . As a new nursing concept, diversified nursing adds attention to patients' psychological and social changes and differences on top of traditional nursing work. Its focus differs from routine clinical diagnosis and treatment: it provides health services not only for the disease itself but also for the patient's physical health and emotions, and even the emotions of the patient's family, addressing the needs of the person as a whole . Psychological care is the method and means by which nursing staff, through their behavior in interactions with the patient, influence and change the patient's psychological state and behavior to promote recovery. To provide good psychological care, it is important to understand and grasp the patient's psychological activity and to respond according to the psychological reactions observed. Care of physical health means reinforcing interventions directed at the symptoms the doctor has diagnosed, keeping the necessary precautions in mind. It can be seen that diversified nursing is a sublimation of traditional nursing work, distinct from the general clinical nursing process.
The role of the nurse, once limited to the needs of medical and health institutions, has expanded to hospital, community, and family services, and the nurse's workplace has extended from the hospital into the community and the home. Diversified nursing is targeted nursing based on the specific conditions of different patients: under the arrangement of the attending doctor, the nursing staff provide the corresponding nursing interventions and assistance. This help is directed not only at the disease itself but also at comforting and guiding the patient psychologically and spiritually. In the course of illness, the disease brings a series of changes to the patient's body and mind, and these changes substantially affect the course of the disease, so it is essential to help the patient recognize these problems and adapt to the changes. Palpitations combine subjective sensations with objective signs: the patient feels the heart beating rapidly, irregularly, or forcefully, and the heart rate may be too fast, too slow, or irregular. Dyspnoea likewise combines subjective sensation with objective signs: the patient feels that breathing is laboured, while objectively the respiratory rate is increased and the breathing movements are fast and deep. Severe cases show cyanosis, a bluish colour of the mucous membranes and skin that appears when the absolute amount of reduced (deoxygenated) haemoglobin in the capillary blood is markedly raised (classically above about 50 g/L) .

2.3. Adjuvant Treatment of Diversified Nursing for Cardiology Patients

2.3.1. Routine Monitoring of the Condition

Under normal circumstances, one doctor is in charge of a patient, and a doctor often has to look after several patients at once, so the work of the nursing staff is very important. Nurses must watch the patient's vital signs at all times, provide assistance when necessary, help the patient complete the various physical examinations, and inform the doctor promptly of any abnormality. Nurses therefore also need to understand the various medical devices and instruments . With the development of modern medical technology, medical imaging and image-data processing have become increasingly common in clinical medicine. These technical means help doctors observe and understand the patient's condition and improve the efficiency of diagnosis and treatment and the rate of recovery . This article therefore briefly introduces some of the relevant theories and algorithms of medical imaging and image-data processing . In clinical diagnosis and treatment, doctors can acquire information about the patient's cells and blood with medical imaging equipment; by observing the cell images, they learn about the condition and make treatment judgments. To improve the quality and effect of diagnosis, however, the acquired images usually need further processing to raise image quality and ease the doctor's observation and judgment, which is where computer image-data processing comes in. The processing of medical images generally includes denoising and enhancement, registration, and feature extraction , treated in the following aspects.
(1) Denoising Enhancement. In the process of image acquisition, various external interferences introduce noise, so the image must be denoised and enhanced; here we use the weighted-average method. Image denoising reduces the amount of useless or distracting information in an image, so a good noise-reduction operation requires an understanding of the noise sources and noise characteristics. Image enhancement highlights the useful information in an image; skilled enhancement distinguishes target groups according to their visual background sensitivity, light sensitivity, colour sensitivity, and sharpness requirements. Assuming that the gray value at point $(x, y)$ is $A(x, y)$ and taking a $3 \times 3$ template, the reciprocal of the gradient is

(1) $G(x, y; i, j) = \frac{1}{\lvert A(x+i,\, y+j) - A(x, y) \rvert}$,

where $x$ and $y$ give the position of the point in the target image and $i, j \in \{-1, 0, 1\}$ index the neighbourhood. The weight matrix formed from the gradient reciprocals is

(2) $W = \begin{pmatrix} w(i-1, j-1) & w(i-1, j) & w(i-1, j+1) \\ w(i, j-1) & w(i, j) & w(i, j+1) \\ w(i+1, j-1) & w(i+1, j) & w(i+1, j+1) \end{pmatrix}$.

Here the centre weight is $w(i, j) = \frac{1}{2}$ and the remaining weights also sum to $\frac{1}{2}$, so that

(3) $w(x+i, y+j) = \frac{1}{2} \cdot \frac{G(x, y; i, j)}{\sum_i \sum_j G(x, y; i, j)}$,

with the sum running over the neighbours. After that, the median filter method is used to denoise the image. The principle is to slide a window containing an odd number $m$ of points across the signal and replace the gray value of the centre pixel with the median of the gray values inside the window:

(4) $G(i) = \operatorname{Med}\{A(i-e), \ldots, A(i+e)\}$, with $m = 2e + 1$.

Then the dual-tree complex wavelet transform method is used to filter. The dual-tree complex wavelet is built on the complex wavelet by adding a second filter tree, so that the complex wavelet transform runs in two filter banks. The one-dimensional transform is

(5) $\psi(t) = \psi_h(t) + \mathrm{i}\, \psi_g(t)$,

and in two dimensions it becomes separable:

(6) $\psi(x, y) = \psi(x)\, \psi(y)$.

The dual-tree complex wavelet performs better in decomposition and reconstruction and is more conducive to preserving image detail.
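To make the denoising pipeline concrete, here is a minimal NumPy sketch of the inverse-gradient weighted average of Eqs. (1)-(3) and a 2-D analogue of the median filter of Eq. (4). It is an illustrative implementation under our own naming; the small epsilon guarding the division is our addition, and the dual-tree complex wavelet stage is omitted because it needs a dedicated filter-bank library.

```python
import numpy as np

def inverse_gradient_smooth(img, eps=1e-6):
    """One pass of 3x3 inverse-gradient weighted averaging (Eqs. 1-3).

    Neighbour weights are proportional to 1/|A(x+i, y+j) - A(x, y)|,
    normalised to sum to 1/2; the centre pixel keeps weight 1/2.
    """
    img = np.asarray(img, dtype=np.float64)
    pad = np.pad(img, 1, mode="edge")
    h, w = img.shape
    acc = np.zeros_like(img)    # weighted sum of neighbours
    wsum = np.zeros_like(img)   # normalising constant per pixel
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue
            nb = pad[1 + di:1 + di + h, 1 + dj:1 + dj + w]
            g = 1.0 / (np.abs(nb - img) + eps)   # reciprocal gradient, Eq. (1)
            acc += g * nb
            wsum += g
    # Eq. (3): the eight neighbours share half the total weight,
    # and the centre pixel keeps the other half.
    return 0.5 * img + 0.5 * acc / wsum

def median_smooth(img, e=1):
    """Median filter over a (2e+1) x (2e+1) window (2-D analogue of Eq. 4)."""
    img = np.asarray(img, dtype=np.float64)
    pad = np.pad(img, e, mode="edge")
    h, w = img.shape
    k = 2 * e + 1
    # Stack all k*k shifted views and take the per-pixel median.
    stack = np.stack([pad[i:i + h, j:j + w]
                      for i in range(k) for j in range(k)])
    return np.median(stack, axis=0)
```

A single pass of inverse_gradient_smooth suppresses noise while the inverse-gradient weights keep strong edges from being averaged away; the filter can be iterated a few times for stronger smoothing.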
(2) Registration. The purpose of image registration is to find a matching relationship so that the image to be registered aligns with the source image. Common methods include rigid-body transformation, affine transformation, projection transformation, and nonlinear transformation. First is the rigid-body transformation, whose defining property is that the distance between corresponding points is unchanged by the transformation; it consists of a rotation plus a translation:

(7) $\begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} + \begin{pmatrix} t_x \\ t_y \end{pmatrix}$.

Here $(x, y)$ is a point in the image before transformation and $(x', y')$ the corresponding point afterwards. Second is the affine transformation, a linear map from two-dimensional coordinates to two-dimensional coordinates that preserves the straightness and parallelism of lines:

(8) $\begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} + \begin{pmatrix} t_x \\ t_y \end{pmatrix}$,

where $\bigl(\begin{smallmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{smallmatrix}\bigr)$ is a real matrix. Third is the projection transformation, under which straight lines remain straight but the parallel relationship between them generally changes:

(9) $\begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \end{pmatrix} \begin{pmatrix} x \\ y \\ 1 \end{pmatrix}$,

where $\bigl(\begin{smallmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \end{smallmatrix}\bigr)$ is a real matrix. Fourth is the nonlinear transformation, the opposite of the projection transformation: after the image is transformed, straight lines in the image are no longer straight. In general,

(10) $(x', y') = F(x, y)$,

where $F$ is the mapping function of the transformation. Nonlinear transformations are common in two-dimensional space, with the polynomial form

(11) $x' = a_{00} + a_{10}x + a_{01}y + a_{20}x^2 + a_{11}xy + a_{02}y^2 + \cdots$, $\quad y' = b_{00} + b_{10}x + b_{01}y + b_{20}x^2 + b_{11}xy + b_{02}y^2 + \cdots$.
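The following sketch applies the registration models of Eqs. (7)-(9) to 2-D point coordinates with NumPy. The helper names are ours, and the homography in the projective case is assumed to be given; in a real registration pipeline it would be estimated from point correspondences.

```python
import numpy as np

def rigid_transform(pts, theta, tx, ty):
    """Rotate by theta and translate (Eq. 7): distances are preserved."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return pts @ R.T + np.array([tx, ty])

def affine_transform(pts, A, t):
    """General 2x2 linear map plus translation (Eq. 8);
    parallel lines stay parallel."""
    return pts @ np.asarray(A).T + np.asarray(t)

def projective_transform(pts, H):
    """3x3 homography (cf. Eq. 9): straight lines stay straight,
    but parallelism is generally lost."""
    ones = np.ones((len(pts), 1))
    hom = np.hstack([pts, ones]) @ np.asarray(H).T   # homogeneous coordinates
    return hom[:, :2] / hom[:, 2:3]                  # divide out the scale

# Example: move a point set by a known rigid motion and verify the
# defining property of Eq. (7), i.e. pairwise distances are unchanged.
pts = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 5.0]])
moved = rigid_transform(pts, theta=np.pi / 6, tx=2.0, ty=-1.0)
d_before = np.linalg.norm(pts[0] - pts[1])
d_after = np.linalg.norm(moved[0] - moved[1])
assert np.isclose(d_before, d_after)
```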
(3) Feature Extraction. The gradient-based edge detection method is the first technique. Take the two-dimensional Gaussian function

(12) $G(x, y) = \frac{1}{2\pi\sigma^2} \exp\!\left(-\frac{x^2 + y^2}{2\sigma^2}\right)$

and use it to smooth the image. The first-order partial derivatives then give the magnitude and direction of the gradient, non-maximum suppression is applied to the gradient magnitude, and finally a double-threshold algorithm detects and links the edges. The second technique is edge feature extraction based on morphology: the inner and outer edges of the target image are extracted with equations (13) and (14), respectively, using an erosion and a dilation by the structuring element $B$:

(13) $e_1(A) = A - (A \ominus B)$,

(14) $e_2(A) = (A \oplus B) - A$.

The third technique is feature extraction based on scale invariance. The scale space $L(x, y, \sigma)$ of the target image is obtained by convolving a Gaussian $G(x, y, \sigma)$ of varying scale with the original image $I(x, y)$:

(15) $L(x, y, \sigma) = G(x, y, \sigma) \ast I(x, y)$,

where the Gaussian is

(16) $G(x, y, \sigma) = \frac{1}{2\pi\sigma^2}\, e^{-\left[(x - m/2)^2 + (y - n/2)^2\right] / 2\sigma^2}$.

Here $(x, y)$ is the pixel position, $\ast$ denotes convolution, $m \times n$ is the size of the Gaussian template, and $\sigma$ is the scale-space factor. Key points and their orientations are then determined: a feature descriptor is extracted at each key point, and an orientation value is assigned to its feature vector. The gradient magnitude and orientation are computed as

(17) $m(x, y) = \sqrt{\left[L(x+1, y) - L(x-1, y)\right]^2 + \left[L(x, y+1) - L(x, y-1)\right]^2}$, $\quad \theta(x, y) = \tan^{-1}\frac{L(x, y+1) - L(x, y-1)}{L(x+1, y) - L(x-1, y)}$.
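As an illustration of the gradient-based steps in Eqs. (12), (15), and (17), the sketch below smooths an image with a sampled Gaussian, computes central-difference gradient magnitude and orientation, and applies a simple double-threshold edge map. np.arctan2 replaces the arctangent ratio of Eq. (17) to avoid division by zero, the one-step edge linking is a deliberate simplification of the full Canny procedure, and the kernel size and thresholds are illustrative choices of ours.

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """2-D Gaussian of Eq. (12), sampled on a size x size grid, normalised."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return g / g.sum()

def smooth(img, size=5, sigma=1.0):
    """Convolve the image with the sampled Gaussian (Eq. 15, single scale)."""
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    p = np.pad(np.asarray(img, dtype=np.float64), pad, mode="edge")
    h, w = img.shape
    out = np.zeros((h, w))
    for i in range(size):
        for j in range(size):
            out += k[i, j] * p[i:i + h, j:j + w]
    return out

def gradient_features(img):
    """Central-difference gradient magnitude and orientation (Eq. 17)."""
    L = smooth(img)
    dx = np.zeros_like(L)
    dy = np.zeros_like(L)
    dx[1:-1, :] = L[2:, :] - L[:-2, :]   # L(x+1, y) - L(x-1, y)
    dy[:, 1:-1] = L[:, 2:] - L[:, :-2]   # L(x, y+1) - L(x, y-1)
    m = np.sqrt(dx**2 + dy**2)
    theta = np.arctan2(dy, dx)           # robust form of tan^-1(dy/dx)
    return m, theta

def double_threshold(m, low, high):
    """Keep strong edges and weak edges adjacent to strong ones
    (one-step, Canny-style linking)."""
    strong = m >= high
    weak = (m >= low) & ~strong
    grown = strong.copy()                # grow strong set by one pixel
    grown[1:, :] |= strong[:-1, :]
    grown[:-1, :] |= strong[1:, :]
    grown[:, 1:] |= strong[:, :-1]
    grown[:, :-1] |= strong[:, 1:]
    return strong | (weak & grown)
```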
2.3.2. Daily Nursing and Psychological Treatment of Patients

The daily life and nursing of the patient have been introduced in detail above, so we will not repeat them here. In addition, patients receive psychological and spiritual guidance and treatment. Cardiology patients, especially those suffering from multiple complications at once, are under great psychological and spiritual pressure, so nursing staff need to provide extra psychological relief and guidance for them, so that they cooperate actively with treatment and the condition heals more quickly. Nurses can also provide more health education about drugs and diet to help patients acquire health knowledge. Finally, playing soothing, relaxing music at appropriate times helps patients relax and release negative emotions .
Pathogenesis of Cardiovascular Disease The causes of cardiovascular disease can be divided into congenital and acquired factors. Congenital factors mainly include family inheritance and own genetic mutations; acquired factors are mainly due to the patient's usual poor lifestyle, such as staying up late, smoking, alcoholism, and unhealthy diet habits; in addition, the body may also be affected by the adverse reactions of certain drugs, leading to the occurrence of such diseases . In short, the occurrence of cardiovascular disease is often a combination of many factors. 2.1.3. Common Types of Cardiovascular Diseases and Their Clinical Manifestations Cardiovascular diseases include a wide variety of diseases, mainly including the following. (1) Hypertension. Hypertension is one of the most common types of cardiovascular diseases, most of which have a slow onset and no special clinical manifestations. Typical hypertensive patients often present with dizziness, headache, and flushing. In addition, patients with severe hypertension will experience damage to the heart, brain, kidney, and eye retina, manifested as chest tightness, shortness of breath, uncoordinated limbs and even hemiplegia, decreased language ability, swollen feet, decreased vision, etc. (2) Coronary Heart Disease. Mainly including angina and myocardial infarction, the damaged organ is the heart in this disease. When patients with angina pectoris are emotionally excited or overworked, they will experience severe pain in the sternum, accompanied by shoulder pain. The symptoms of myocardial infarction and angina pectoris are similar, but the former is even worse. Generally, patients with myocardial infarction have pain for longer and more intense pain. This often attacks when resting, accompanied by pale complexion, sweating, and limbs. (3) Heart Failure. Heart failure is also a disease of impaired cardiac function. The main manifestation is that the patient's working ability and endurance are significantly reduced, breathing is difficult, and there is always sputum in the throat. When sleeping at night, you need to lie on your side to fall asleep; even you may need to sit all night to sleep, and your legs may experience edema, accompanied by bloating and loss of appetite. (4) Arrhythmia. Arrhythmia includes chronic arrhythmia and tachyarrhythmia. Chronic arrhythmia in patients mainly manifests with dizziness, weakness of the limbs, frequent fainting, and unconsciousness; tachyarrhythmia in patients mainly manifest with palpitations, palpitation, and often feeling short of breath.
“Cardiovascular” is the collective term for the heart and blood vessels. Usually, the cardiovascular system is the blood circulation system including the heart and blood vessels. All types of diseases related to the human heart and blood vessels can be collectively referred to as cardiovascular disease, or heart disease for short. The cardiovascular system is made up of the heart and blood vessels. The heart carries blood to the arteries of the body, where it is exchanged for nutrients, oxygen, and metabolic waste via the capillaries, after which the blood enters the veins and is carried back to the heart by the veins of the body, thus forming the cardiovascular system. When the heart and the blood vessels connected to it become diseased, it is called cardiovascular disease, including insufficient blood supply to the heart, cardiac arrhythmia, cardiac hypertrophy, and cardiovascular disease, all of which can affect the function of the cardiovascular system. Cardiovascular diseases usually occur in middle-aged and elderly people over the age of 50. However, in recent years, such diseases have been seen in younger generations and have become a major public health problem. Many young people used to think that hypertension is a disease that only elderly people get and it has nothing to do with them. However, there are data reports showing that the incidence of hypertension has reached 8% among Chinese primary and middle school students aged 6–18. This suggests that cardiovascular disease is increasingly targeted at a younger age and that the perception that younger people do not develop cardiovascular disease is misplaced. These survey data tell us that the symptoms of cardiovascular disease need to be taken seriously at all age groups. The onset of cardiovascular disease is hidden, the previous symptoms are not obvious, the onset of the disease is long, and it will gradually erode all cells and organs of the human body without the human body consciousness. In addition, these cardiovascular diseases are all related to each other and even cause and affect each other. Whenever suffering from a certain cardiovascular disease, the patient is likely to cause various other diseases at the same time. For example, diabetes patients are often accompanied by high blood pressure and coronary heart disease .
The causes of cardiovascular disease can be divided into congenital and acquired factors. Congenital factors mainly include family inheritance and own genetic mutations; acquired factors are mainly due to the patient's usual poor lifestyle, such as staying up late, smoking, alcoholism, and unhealthy diet habits; in addition, the body may also be affected by the adverse reactions of certain drugs, leading to the occurrence of such diseases . In short, the occurrence of cardiovascular disease is often a combination of many factors.
Cardiovascular diseases include a wide variety of diseases, mainly including the following. (1) Hypertension. Hypertension is one of the most common types of cardiovascular diseases, most of which have a slow onset and no special clinical manifestations. Typical hypertensive patients often present with dizziness, headache, and flushing. In addition, patients with severe hypertension will experience damage to the heart, brain, kidney, and eye retina, manifested as chest tightness, shortness of breath, uncoordinated limbs and even hemiplegia, decreased language ability, swollen feet, decreased vision, etc. (2) Coronary Heart Disease. Mainly including angina and myocardial infarction, the damaged organ is the heart in this disease. When patients with angina pectoris are emotionally excited or overworked, they will experience severe pain in the sternum, accompanied by shoulder pain. The symptoms of myocardial infarction and angina pectoris are similar, but the former is even worse. Generally, patients with myocardial infarction have pain for longer and more intense pain. This often attacks when resting, accompanied by pale complexion, sweating, and limbs. (3) Heart Failure. Heart failure is also a disease of impaired cardiac function. The main manifestation is that the patient's working ability and endurance are significantly reduced, breathing is difficult, and there is always sputum in the throat. When sleeping at night, you need to lie on your side to fall asleep; even you may need to sit all night to sleep, and your legs may experience edema, accompanied by bloating and loss of appetite. (4) Arrhythmia. Arrhythmia includes chronic arrhythmia and tachyarrhythmia. Chronic arrhythmia in patients mainly manifests with dizziness, weakness of the limbs, frequent fainting, and unconsciousness; tachyarrhythmia in patients mainly manifest with palpitations, palpitation, and often feeling short of breath.
2.2.1. Routine Nursing Content General nursing work can mainly provide sanitary care and assistance to patients who have lost the ability to take care of themselves, including some daily life, such as washing hair and bathing, changing clothes, wiping the body, and cutting nails. Different parts have different care impacts . The specific methods are as follows: Shampoo and Shower . For patients who are unable to take care of themselves, nursing staff should help them wash their hair and take a bath and wipe them dry in time after washing to avoid cold aggravation. In addition, when washing their hair, the nurse should pay attention to the patient's complexion, breathing, and pulse. If there is any abnormality, the nurse should stop washing their hair immediately. When taking a bath, the method of bathing on the hospital bed should be generally used, the water temperature is to be controlled at about 40–45°C, and the bathing action must be as gentle as possible. Oral Care . Nursing staff should help patients brush their teeth after eating. Because the mouth is warm and moist and there are food residues after eating, it is easy to breed bacteria, so they need to brush their teeth frequently. For patients who wear dentures, they should be removed and rinsed with cold water. Nursing Care for Patients with Severe Coma . Nursing staff must strengthen the oral care of patients and turn over and wipe their back frequently to prevent bedsores from breeding. People with incontinence need to change their bed linen and clothing in time. Care for Children . Child patients have low immunity, weak resistance, susceptibility to diseases, and low awareness of cleaning. Therefore, nursing staff should always pay attention to and strengthen the cleaning of child patients, like washing hands and teeth before meals and after meals, taking a bath, and changing clothes frequently. Nursing Care for Elderly Patients . Due to the decline in various functions of the body of the elderly, the amount of activity is relatively small and the reaction is slow. Therefore, nursing staff should strengthen the skin care of the elderly to prevent bedsores. 2.2.2. Diversity Nursing With the progress of society and the further development of nursing work, general nursing work gradually can no longer meet the needs of different patients and, at this time, diversified nursing has emerged . Diversity nursing, as the name implies, has a variety of nursing methods and nursing skills and provides different and specific nursing assistance according to the different needs of different patients. Therefore, diversity nursing can also be called personalized or humanized nursing, which can maximize a full range of care given to the patient to enhance and assist the patient's recovery . As a new nursing concept, diversified nursing is a comprehensive nursing care that adds to the psychological and social changes and differences of patients on the basis of traditional nursing work. The focus of its work is different from the routine clinical diagnosis and treatment in medical treatment. It does not only provide health services for the patient's disease itself but also covers the physical health of the patient, the psychological emotions, and even the psychological emotions of the patient's family members. Each aspect involves the needs of the person as a whole . 
Psychological care is a method and means for nursing staff to influence and change the patient's psychological state and behavior by their behavior in their interactions with the patient to promote their recovery. To do a good job in psychological care of patients, it is important to understand and grasp the psychological activities of patients. Responses are made according to psychological reactions. Care in terms of physical health is to enhance interventions by taking care of the symptoms diagnosed by the doctor, keeping in mind the precautions to be taken. It can be seen that diversified nursing is a sublimation of traditional nursing work. It is different from the general clinical nursing process. The role of the nurse, which was limited to the needs of medical and health institutions, has been expanded to include hospital, community, and family services, and the workplace of the nurse has changed from the hospital to the community and family. Diversified nursing is targeted nursing based on the specific conditions of different patients. The nursing staff is under the arrangement of the attending doctor. The staff provide corresponding nursing intervention and assist with the patient's condition. This help not only targets the disease itself but also comforts and guides the patient's psychology and spirit. In the course of illness, the disease will bring a series of changes to the patient's body and mind, and these changes will have a substantial impact on the patient's disease process, so it is assorted to help the patient recognize these problems and adapt to this change. Palpitations are a combination of subjective sensations and objective signs. Subjectively, the patient feels that the heart is beating rapidly, irregularly or forcefully, but also that the heart rate is too fast, too slow or too slow; dyspnoea, which is a combination of subjective sensations and objective signs. Subjectively, the patient feels that breathing is laboured, and objectively, the number of breaths is increased and the movements are fast and large causing cyanosis, which is a bluish colour on the mucous membranes and skin, and the absolute value of reduced haemoglobin in the body to be over 59% of unoxygenated haemoglobin .
General nursing work can mainly provide sanitary care and assistance to patients who have lost the ability to take care of themselves, including some daily life, such as washing hair and bathing, changing clothes, wiping the body, and cutting nails. Different parts have different care impacts . The specific methods are as follows: Shampoo and Shower . For patients who are unable to take care of themselves, nursing staff should help them wash their hair and take a bath and wipe them dry in time after washing to avoid cold aggravation. In addition, when washing their hair, the nurse should pay attention to the patient's complexion, breathing, and pulse. If there is any abnormality, the nurse should stop washing their hair immediately. When taking a bath, the method of bathing on the hospital bed should be generally used, the water temperature is to be controlled at about 40–45°C, and the bathing action must be as gentle as possible. Oral Care . Nursing staff should help patients brush their teeth after eating. Because the mouth is warm and moist and there are food residues after eating, it is easy to breed bacteria, so they need to brush their teeth frequently. For patients who wear dentures, they should be removed and rinsed with cold water. Nursing Care for Patients with Severe Coma . Nursing staff must strengthen the oral care of patients and turn over and wipe their back frequently to prevent bedsores from breeding. People with incontinence need to change their bed linen and clothing in time. Care for Children . Child patients have low immunity, weak resistance, susceptibility to diseases, and low awareness of cleaning. Therefore, nursing staff should always pay attention to and strengthen the cleaning of child patients, like washing hands and teeth before meals and after meals, taking a bath, and changing clothes frequently. Nursing Care for Elderly Patients . Due to the decline in various functions of the body of the elderly, the amount of activity is relatively small and the reaction is slow. Therefore, nursing staff should strengthen the skin care of the elderly to prevent bedsores.
With the progress of society and the further development of nursing work, general nursing work gradually can no longer meet the needs of different patients and, at this time, diversified nursing has emerged . Diversity nursing, as the name implies, has a variety of nursing methods and nursing skills and provides different and specific nursing assistance according to the different needs of different patients. Therefore, diversity nursing can also be called personalized or humanized nursing, which can maximize a full range of care given to the patient to enhance and assist the patient's recovery . As a new nursing concept, diversified nursing is a comprehensive nursing care that adds to the psychological and social changes and differences of patients on the basis of traditional nursing work. The focus of its work is different from the routine clinical diagnosis and treatment in medical treatment. It does not only provide health services for the patient's disease itself but also covers the physical health of the patient, the psychological emotions, and even the psychological emotions of the patient's family members. Each aspect involves the needs of the person as a whole . Psychological care is a method and means for nursing staff to influence and change the patient's psychological state and behavior by their behavior in their interactions with the patient to promote their recovery. To do a good job in psychological care of patients, it is important to understand and grasp the psychological activities of patients. Responses are made according to psychological reactions. Care in terms of physical health is to enhance interventions by taking care of the symptoms diagnosed by the doctor, keeping in mind the precautions to be taken. It can be seen that diversified nursing is a sublimation of traditional nursing work. It is different from the general clinical nursing process. The role of the nurse, which was limited to the needs of medical and health institutions, has been expanded to include hospital, community, and family services, and the workplace of the nurse has changed from the hospital to the community and family. Diversified nursing is targeted nursing based on the specific conditions of different patients. The nursing staff is under the arrangement of the attending doctor. The staff provide corresponding nursing intervention and assist with the patient's condition. This help not only targets the disease itself but also comforts and guides the patient's psychology and spirit. In the course of illness, the disease will bring a series of changes to the patient's body and mind, and these changes will have a substantial impact on the patient's disease process, so it is assorted to help the patient recognize these problems and adapt to this change. Palpitations are a combination of subjective sensations and objective signs. Subjectively, the patient feels that the heart is beating rapidly, irregularly or forcefully, but also that the heart rate is too fast, too slow or too slow; dyspnoea, which is a combination of subjective sensations and objective signs. Subjectively, the patient feels that breathing is laboured, and objectively, the number of breaths is increased and the movements are fast and large causing cyanosis, which is a bluish colour on the mucous membranes and skin, and the absolute value of reduced haemoglobin in the body to be over 59% of unoxygenated haemoglobin .
2.3.1. Routine Monitoring of the Condition Under normal circumstances, there is only one doctor in charge of a patient and the doctor sometimes needs to take care of multiple patients. At this time, the work of the nursing staff is very important. The nursing staff needs to pay attention to the patient's vital signs at all times and provide assistance when necessary. The doctor helps the patient to perform various physical examinations, and the doctor must be informed in time if there is any abnormality. Therefore, nurses also need to understand various medical equipment and instruments . With the development of modern medical technology, medical imaging technology and image data processing technology are becoming more and more common in clinical medicine. The use of these technical means can help doctors observe and understand the patient's condition and improve the efficiency of diagnosis and treatment and the recovery rate of the condition . For some related theories and algorithms of medical imaging technology and image data processing technology, this article also makes some brief introductions . In clinical diagnosis and treatment, doctors can take the patient's cell and blood information with the help of medical imaging equipment. By observing the patient's cell image, they can learn about the patient's condition and make treatment judgments. However, in order to improve the quality and effect of diagnosis and treatment, it is often necessary to perform further processing on the acquired patient's relevant cell images to improve the quality of the images and facilitate the observation and judgment of doctors. Computer image data processing technology is involved here. Generally, the processing of medical images includes image denoising and enhancement, registration, and feature extraction , specifically in the following aspects. (1) Denoising Enhancement. In the process of image acquisition, due to various interferences from the outside world, the image will appear to be noisy, so the image needs to be denoised and enhanced. Here, we use the weighted average method for processing. Image denoising is an operation that reduces the amount of useless or distracting useful information in an image. So, a good noise reduction operation requires an understanding of the sources of noise and noise characteristics. Image enhancement is the operation of highlighting the useful information in an image. Deeply skilled image enhancement will differentiate between target groups, understanding their visual background sensitivity preferences, light sensitivity, colour sensitivity, and sharpness. Assuming that the gray value of point ( x , y ) is A ( x , y ), take a 3 × 3 template, and the reciprocal of the gradient is (1) G x , y : i , j = 1 A x + i , y + j − A x , y . Among them, x and y represent the position of the point in the target image and i and j represent the sequence in the target image. The weight matrix formed by the reciprocal of the gray scale is (2) W = w i − 1. j − 1 w i − 1 , j w i − 1 , j + 1 w i , j − 1 w i , j w i , j + 1 w i + 1 , j − 1 w i + 1 , j − 1 w i + 1 , j + 1 . Here, w ( i , j )=(1/2) and the sum of other values is also (1/2); then, (3) w x + i , y + j = 1 2 G x , y : i , j ∑ i ∑ j G x , y : i , j . After that, the median filter method is used to denoise the image. 
After that, a median filter is used to denoise the image. The principle is to slide a window of odd length m over the image and replace the grey value of the centre pixel with the median of the grey values within the window:
(4) G(i) = Med{ A(i − e), …, A(i + e) },
where m = 2e + 1. Then the dual-tree complex wavelet transform is used for filtering. The dual-tree complex wavelet is built on the complex wavelet by adding a second filter bank, so that the transform runs in two filter trees. The one-dimensional transform is
(5) ψ(t) = ψ_h(t) + i ψ_g(t),
and when the complex wavelet becomes a dual-tree complex wavelet its two-dimensional form is
(6) ψ(x, y) = ψ(x) ψ(y).
The dual-tree complex wavelet performs better in decomposition and reconstruction and is better suited to processing fine image detail.
(2) Registration. The purpose of image registration is to find a matching relationship so that the image to be registered can be aligned with the source image. Common methods include rigid-body transformation, affine transformation, projection transformation and nonlinear transformation. First is the rigid-body transformation, whose principle is that the distance between any two corresponding points is unchanged before and after the transformation:
(7) x′ = x cos θ − y sin θ + t_x,  y′ = x sin θ + y cos θ + t_y.
Here (x, y) is a point in the image before transformation and (x′, y′) the corresponding point afterwards. Second is the affine transformation, a linear transformation from two-dimensional coordinates to two-dimensional coordinates that keeps straight lines straight after the conversion:
(8) [x′; y′] = [a11 a12; a21 a22] [x; y] + [t_x; t_y],
where [a11 a12; a21 a22] is a real matrix. Third is the projection transformation, in which lines remain lines after the transformation but the parallel relationship between them is not preserved; written in homogeneous form,
(9) [x′; y′] = [a11 a12 a13; a21 a22 a23] [x; y; 1],
where the coefficients form a real matrix. Fourth is the nonlinear transformation, the opposite case: after the transformation a straight line in the image is in general no longer straight. The general form is
(10) (x′, y′) = F(x, y),
where F is the mapping function after the transformation. Nonlinear transformations are common in two-dimensional space, with the polynomial expression
(11) x′ = a00 + a10 x + a01 y + a20 x² + a11 x y + a02 y² + ⋯,
     y′ = b00 + b10 x + b01 y + b20 x² + b11 x y + b02 y² + ⋯.
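The denoising and registration steps above are standard enough that library routines exist for them. The short sketch below uses SciPy (an assumption on our part; the paper does not name its software) to apply a median filter as in equation (4) and then resample an image under the affine map of equation (8); the rotation angle and translation are arbitrary illustrative values.

import numpy as np
from scipy import ndimage

# Median filtering (eq. 4): each pixel is replaced by the median of an
# odd-sized sliding window (here 3 x 3).
rng = np.random.default_rng(0)
noisy = rng.normal(size=(64, 64))
denoised = ndimage.median_filter(noisy, size=3)

# Affine registration step (eq. 8): forward map x' = A x + t.
# scipy's affine_transform expects the *inverse* map, taking output
# coordinates back to input coordinates, so we invert A and adjust t.
theta = np.deg2rad(5.0)                       # illustrative rotation
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
t = np.array([2.0, -1.0])                     # illustrative translation (pixels)
A_inv = np.linalg.inv(A)
registered = ndimage.affine_transform(denoised, A_inv, offset=-A_inv @ t)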
(3) Feature Extraction. The first technique is gradient-based edge detection. Take the two-dimensional Gaussian function
(12) G(x, y) = (1 / (2πσ²)) exp(−(x² + y²) / (2σ²))
and use it to smooth the image. The first-order partial derivatives then give the magnitude and direction of the gradient, non-maximum suppression is applied to the gradient magnitudes, and finally a double-threshold algorithm detects and connects the edges. The second technique is edge feature extraction based on morphology: extract the inner and outer edges of the target image as in equations (13) and (14), respectively, using morphological erosion (⊖) and dilation (⊕) with a structuring element B:
(13) e₁(A) = A − (A ⊖ B),
(14) e₂(A) = (A ⊕ B) − A.
The third technique is feature extraction based on scale invariance. Define the scale space L(x, y, σ) of the target image as a Gaussian function G(x, y, σ) of varying scale convolved with the original image I(x, y):
(15) L(x, y, σ) = G(x, y, σ) ∗ I(x, y),
with
(16) G(x, y, σ) = (1 / (2πσ²)) e^{−((x − m/2)² + (y − n/2)²) / (2σ²)}.
Here (x, y) is the pixel position in the image, ∗ denotes convolution, m × n is the size of the Gaussian template, and σ is the scale-space factor. Key points and their orientations are then determined: a feature descriptor is extracted at each key point, and an orientation value is assigned to the key point's feature vector. The gradient modulus and orientation are computed, respectively, as
(17) m(x, y) = √[ (L(x + 1, y) − L(x − 1, y))² + (L(x, y + 1) − L(x, y − 1))² ],
     θ(x, y) = tan⁻¹[ (L(x, y + 1) − L(x, y − 1)) / (L(x + 1, y) − L(x − 1, y)) ].
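As a small illustration of the scale-space step, the sketch below Gaussian-smooths an image and returns the gradient modulus and orientation maps of equation (17); the default σ = 1.6 is a conventional choice for this kind of pipeline, not a value taken from the text.

import numpy as np
from scipy import ndimage

def gradient_features(img, sigma=1.6):
    """Smooth img at scale sigma (eqs. 12, 15-16) and return the smoothed
    image plus the gradient modulus and orientation of eq. (17),
    approximated with central finite differences via np.gradient."""
    L = ndimage.gaussian_filter(img.astype(float), sigma)  # L = G * I
    dLy, dLx = np.gradient(L)        # derivatives along rows (y) and columns (x)
    m = np.hypot(dLx, dLy)           # gradient modulus m(x, y)
    theta = np.arctan2(dLy, dLx)     # gradient orientation theta(x, y)
    return L, m, theta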
2.3.2. Daily Nursing and Psychological Treatment of Patients
The patient's daily life and nursing have been introduced in detail above, so they are not repeated here. In addition, patients receive psychological and spiritual guidance and treatment. Cardiology patients, especially those with several complications at once, are under great psychological and spiritual pressure, so nursing staff need to provide extra psychological relief and guidance so that patients cooperate actively with treatment and the condition heals more quickly. Nurses can also offer more health education about drugs and diet to broaden patients' health knowledge, and playing appropriate soothing, relaxing music helps patients relax and release negative emotions .
3.1. Subjects and General Information
This experiment selected 300 patients admitted to the cardiology department of our hospital in 2019, including patients with hypertension, diabetes, coronary heart disease and angina pectoris, and patients with multiple complications. According to the treatment method they were divided into an experimental group and a control group of 150 patients each: the control group received general conventional treatment and nursing, and the experimental group received diversified nursing. The experimental group comprised 35 patients with hypertension (A1), 46 with diabetes (B1), 28 with coronary heart disease (C1), 24 with angina pectoris (D1) and 17 with multiple complications (E1); their average ages were 61.23 ± 3.45, 63.12 ± 5.17, 59.45 ± 3.63, 58.22 ± 5.78 and 64.59 ± 4.18 years; the male-to-female ratios were 21/14, 28/18, 13/15, 11/13 and 8/9; and the average disease durations were 9.13 ± 3.34, 11.56 ± 4.14, 16.23 ± 3.46, 13.57 ± 3.74 and 19.08 ± 2.54 years, respectively. The control group comprised 30 patients with hypertension (A2), 47 with diabetes (B2), 39 with coronary heart disease (C2), 21 with angina (D2) and 13 with multiple complications (E2); their average ages were 57.23 ± 4.15, 63.14 ± 3.19, 58.45 ± 5.63, 68.21 ± 2.78 and 63.53 ± 4.46 years; the male-to-female ratios were 19/11, 31/16, 17/22, 11/10 and 6/7; and the average disease durations were 11.15 ± 4.38, 13.51 ± 2.14, 9.53 ± 5.47, 21.54 ± 3.61 and 16.08 ± 3.64 years. The specific conditions of the patients are shown in .
3.2. Experimental Method
The two groups of cardiology patients were treated with a consistent treatment plan. The control group received routine care as described above; the experimental group received diversified care on the basis of routine care, specifically including the following tasks.
3.2.1. Health Education
Cardiology patients generally have a long illness and a long treatment process that must proceed gradually and in an orderly way, so patients must fully know and understand their condition. Nursing staff therefore need to explain the basic knowledge of the disease to the patient in detail, emphasize the effect of mental state on the condition, and urge patients to keep a positive attitude and cooperate with treatment and nursing. Nursing staff can give one-to-one explanations and education matched to each patient's age, education level and cognition, so that every patient fully understands their own condition, feels attended to and cared for, and gains enthusiasm and confidence in treatment.
3.2.2. Medication Guidance
Nursing staff should patiently explain to patients and their families the efficacy of the relevant drugs, the method of administration and the precautions to take during treatment; instruct patients to take drugs correctly; inform them which drugs may cause adverse reactions or side effects and how to control these effectively; and encourage patients while instructing family members to help supervise reasonable, regular medication.
3.2.3. Diet Guidance
Cardiology diseases are largely related to patients' daily eating habits.
Excessive intake of high-fat, high-sugar and high-oil foods can easily disturb blood lipids, so during treatment the patient's diet must be watched and controlled. Based on the patient's condition, personal preferences and past eating habits, help the patient draw up a scientific diet plan and, at the same time, supervise appropriate exercise and weight control. Encourage patients to eat until only seventy to eighty percent full at each meal and not to overeat; advise patients with diabetes not to eat sugary foods; and ask patients to drink plenty of water and favour easily digested foods to avoid constipation.
3.2.4. Intervention in Daily Life Behavior
During nursing, nurses should urge patients to avoid smoking and drinking as far as possible and encourage them to arrange work and life reasonably, minimize stress, keep body and mind relaxed and happy, avoid overwork, develop good work and rest habits, and not stay up late. Patients should also be encouraged to take part in simple physical exercise such as Tai Chi, walking and square dancing, while avoiding strenuous activity; if any discomfort arises during exercise, the activity must be stopped immediately.
3.2.5. Follow-Up Intervention
For patients discharged from our hospital, nurses must establish an effective follow-up method, agree a return-visit time with the patient, keep in regular contact, pay attention to changes in the condition and the body, and ask regularly about medication, diet and exercise. They should also urge patients to make corresponding adjustments in time.
3.3. Experimental Observation Indicators
3.3.1. The Patient's Blood Glucose Control
Before the nursing intervention and 2 months after it, both groups of patients had their blood glucose measured under fasting conditions and 2 hours after meals, and the improvement and control of blood glucose in each group were judged from these levels. Blood glucose that was stable at the normal level was graded superior; blood glucose that had fallen compared with before the intervention but had not reached the normal level was graded good; and blood glucose that showed no clear improvement, or had risen, after the intervention was graded poor, as shown in .
3.3.2. The Management Effect of the Patient's Condition
A comprehensive assessment was made using disease management indicators designed independently by our hospital, covering the patient's awareness of the disease, the improvement of the condition, the patient's emotional state, medication compliance and behaviour. Each indicator is scored out of 10; the higher the score, the better the disease management.
3.3.3. Self-Evaluation of Patients' Psychological Status
The patients' state of mind was assessed by questionnaire before and after the nursing intervention, and the self-evaluations of each group were compared and analysed against domestic normal reference values.
Self-evaluation indicators included interpersonal relationships, obsessive-compulsive symptoms, depression, anxiety, paranoia, hostility and psychoticism, among others, and the statistical difference between the patients' self-evaluation results and the domestic reference values was calculated. P < 0.05 indicates a statistical difference, P < 0.01 a significant statistical difference, and P > 0.05 no statistical difference.
3.3.4. Improvement of Patients' Quality of Life
After the nursing intervention, the improvement in quality of life of the two groups was compared using the patients' mental state, ability of daily living, physical function, physical pain, self-perception and general health as observation indicators. Each is scored out of 100; the higher the score, the better the improvement. The scores of each group were counted.
3.3.5. Patient's Satisfaction with Nursing Work and Quality
Patients' satisfaction with the nursing work and quality of care was surveyed by questionnaire with four options: very satisfied, relatively satisfied, dissatisfied and very dissatisfied. The proportions in the two groups were counted.
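The study does not name the statistical test behind the P-value thresholds of Section 3.3.3. As one plausible reading, a two-sample comparison of a single self-evaluation indicator could be run as below; the score arrays are hypothetical stand-ins, not the study's actual data, and Welch's t-test is our assumed choice of test.

import numpy as np
from scipy import stats

# Hypothetical anxiety scores for two groups of 150 patients each.
rng = np.random.default_rng(1)
experimental = rng.normal(1.6, 0.4, size=150)
control = rng.normal(1.9, 0.4, size=150)

# Welch's two-sample t-test, interpreted with the thresholds above.
t, p = stats.ttest_ind(experimental, control, equal_var=False)
if p < 0.01:
    verdict = "significant statistical difference"
elif p < 0.05:
    verdict = "statistical difference"
else:
    verdict = "no statistical difference"
print(f"t = {t:.2f}, P = {p:.4f}: {verdict}")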
4.1. Comparison of the General Conditions of the Two Groups of Patients
The general data of the two groups were collected to study whether differences in disease type, average age, average disease duration and gender affected the rehabilitation outcome, and to analyse the statistical differences between the groups; the situation is shown in . According to , diabetic patients were the most numerous in both groups, with 46 cases in the experimental group and 47 in the control group, followed by hypertension and coronary heart disease, with 35 and 28 cases in the experimental group and 30 and 39 cases in the control group respectively; none of these differences between the groups was statistically significant (P > 0.05). The average age of both groups was around 60 years, and the gender difference between men and women was not significant (P > 0.05). In terms of disease duration, in both groups the older patients tended to have had their disease for longer, most obviously among patients with angina and multiple complications. Overall, the differences in general information between the two groups had no significant impact on the rehabilitation outcome (P > 0.05).
4.2. Comparison of Blood Glucose Control between the Two Groups
We measured the blood glucose of both groups before the nursing intervention and 2 months after it, each time once while fasting and once 2 hours after a meal; the results are shown in . As shown there, after the nursing intervention the blood glucose of the two groups was lower than before, and fasting blood glucose was lower than postprandial blood glucose. To compare the groups' blood glucose before and after the intervention more clearly, the results are plotted in Figures and . Before the intervention both groups' fasting blood glucose was about 10 mmol/L and rose after eating, with groups C2 and E2 reaching the highest postprandial levels of 15.01 ± 0.34 and 13.07 ± 0.34 mmol/L respectively. After the intervention the blood glucose of the experimental group fell markedly, with groups C1 and D1 falling the most: their levels 2 hours after a meal each dropped by about 4 mmol/L. The control group's blood glucose was basically unchanged, and the postprandial level of group C2 even rose. This shows that the nursing intervention had a clear effect on the patients' blood glucose control.
4.3. Comparison of the Management Effect of the Two Groups of Patients
The condition management of the two groups was compared statistically to analyse the effect of the nursing intervention, covering the patient's awareness of the disease, the improvement of the condition, the patient's emotional state, medication compliance and behaviour, each scored out of 10; the statistical results are shown in and .
From and it can be seen that the scores of the patients in the experimental group were significantly higher than those of the control group, indicating that after the nursing intervention the experimental group's awareness of their own disease, improvement of their condition, mood stability, medication compliance and daily behavioural habits had all improved markedly, with each indicator scoring between 8 and 10 points. Patients in the control group scored lower, generally around 5-7 points, showing that their condition was not managed as well as in the experimental group.
4.4. Comparison of Self-Assessment of the Psychological Status of the Two Groups of Patients
Studies have found that cardiology patients suffer not only great physical distress from their illness but also tremendous psychological and mental pressure during treatment. Sources of mental stress include poor sleep quality, oral pain, too little information, perceptual overload or deprivation, vague and uncertain goals, difficulty in carrying out agreed goals, time constraints or waiting, difficulty in choosing or lack of choice, and impaired cognitive function. Treatment should therefore also attend to the psychological and spiritual care of patients. In the experiment we asked patients to evaluate their mental state by questionnaire before and after the nursing intervention. The self-evaluation indicators included interpersonal relationships, obsessive-compulsive symptoms, depression, anxiety, paranoia, hostility and psychoticism, among others, and were compared statistically with the domestic reference values (denoted W).
4.4.1. Comparison of Self-Evaluation of the Two Groups of Patients before Nursing Intervention
The self-evaluation results of the two groups before the nursing intervention, and their comparison with the domestic reference values, are shown in and . They show that before the intervention the self-evaluation scores of both groups were higher than the domestic normal reference values, a significant statistical difference (P < 0.05), while the scores of the experimental and control groups differed little from each other, with no statistical difference between the groups (P > 0.05).
4.4.2. Comparison of Self-Evaluation between the Two Groups of Patients after Nursing Intervention
The self-evaluation results of the two groups after the nursing intervention, and their comparison with the domestic reference values, are shown in . As and show, after the intervention the self-evaluation scores of both groups still differed significantly from the domestic reference values (P < 0.05), each score remaining above the reference value. Compared with the control group, however, the mental state of the experimental group improved markedly after the intervention across anxiety, depression, hostility, obsessive-compulsive symptoms, paranoia and psychoticism, whereas the control group's scores on these mental state indicators changed little between before and after the intervention, so there was a highly significant statistical difference between the experimental and control groups (P < 0.01).
The graphical data show that nursing interventions can greatly benefit patients, relieving not only physical discomfort but also psychological distress, and that treatment is thereby more effective; the approach can be widely implemented.
4.5. Comparison of the Improvement of the Quality of Life of the Two Groups of Patients
The improvement in quality of life of the two groups before and after the nursing intervention was compared using the patients' mental state, ability of daily living, physical function, physical pain, self-perception and general health as observation indicators; the results are shown in and . They show that after the intervention the quality-of-life scores of the experimental group were generally higher than those of the control group. On mental state, physical function and physical pain both groups scored relatively low, between 14 and 35 points, but of the two the experimental group performed better, showing that its quality of life improved more markedly than the control group's (P < 0.05). On the remaining indicators the two groups differed considerably: the control group scored about 22-23 on ability of daily living, about 36-38 on self-perception and about 42-45 on general health, whereas the experimental group scored around 46-70 on these three items, indicating that the physical condition of the experimental group improved substantially after the nursing intervention.
4.6. Comparison of Satisfaction between the Two Groups of Patients with Nursing Work and Quality
The two groups' satisfaction with the quality of nursing work during treatment was counted and compared to analyse the impact of the nursing intervention on the treatment of cardiology patients; the results are shown in and . They show that patients in the experimental group were clearly more satisfied with the nursing work and its quality: 43.33% were very satisfied and 48.67% relatively satisfied, 92% satisfied overall, while only 6.66% were dissatisfied and 1.34% very dissatisfied, 8% dissatisfied overall. In the control group 12.67% were very satisfied and 32% relatively satisfied, 44.67% satisfied overall, while 39.67% were dissatisfied and 15.33% very dissatisfied, 55% dissatisfied overall. The overall satisfaction of the experimental group with the nursing intervention was thus much higher, reflecting the strongly positive effect of the intervention on the treatment of cardiology patients. The questionnaire data show that diversified care clearly helps cardiac patients more than general care, a finding which, if replicated, could raise satisfaction in the care sector and ease the doctor-patient relationship.
High-risk cardiology-related diseases such as hypertension, diabetes, and coronary heart disease cause great harm to people's health and lives, and in recent years these diseases have tended to affect younger people. To protect people's health and improve patients' quality of life, research on the treatment of cardiology diseases is urgent and important. In treating cardiology diseases, general diagnosis and treatment are certainly very important, but patient care during this period is equally essential. Effective care can help patients improve their condition and relieve pain to a large extent. However, traditional nursing work has gradually ceased to meet the diverse needs of different patients, and diversified care, characterized by humanized and comprehensive care, has emerged. This study used examples to examine the effects of diversified nursing on patients' blood sugar control, disease management, mental state, and quality of life. The results show that blood sugar control was more effective in the experimental group receiving diversified nursing. In self-evaluation of mental state, the two groups differed greatly (P < 0.01), with the experimental group in a better mental state, and quality of life also improved more in the experimental group, whose patients had the highest scores. Finally, the overall satisfaction of patients in the experimental group with nursing work reached 92%, indicating that in cardiology treatment, diversified nursing has a more positive impact on patient rehabilitation than traditional nursing.
Management of foot health in people with inflammatory arthritis: British Society for Rheumatology guideline scope

The guideline will be developed using the methods and processes outlined in Creating Clinical Guidelines: Our Protocol . This development process to produce guidance, advice and recommendations for practice has National Institute for Health and Care Excellence (NICE) accreditation.
Foot problems are highly prevalent in adults, children and young people with inflammatory arthritis (IA), but their burden is often underestimated by health professionals. People with IA are often frustrated that concerns relating to their feet are ignored . There are existing guidelines for foot health that may be applicable to IA. A regional podiatry group has developed management guidelines to support non-specialist podiatrists in the management of people with RA and related foot problems . However, these guidelines are now nine years old and, while they are practitioner-facing, they relate only to RA rather than the wider spectrum of IA, do not include children and young people, and are not sensitive to the foot health needs of people from different cultural and religious backgrounds (e.g. footwear and orthotic device appearance and preferences for different materials). There are also standards embedded in the NICE guideline for RA highlighting the need to treat foot pathology . These standards have an underpinning philosophy that encourages empowered self-care, patient involvement in service design, tailoring of services to patients' needs, promotion of informed choice, and timely and appropriate access to services where needed. However, they do not make recommendations about specific aspects of clinical management, such as particular foot problems and different rheumatic diseases. Furthermore, there is limited guidance as to when foot health experts should be consulted, and there are no recommendations on which assessments or interventions should be performed. Finally, the Arthritis and Musculoskeletal Alliance (ARMA) has previously produced generic patient-facing Standards of Care for people with musculoskeletal foot health problems and for children and young people with JIA . These outlined what patients should expect from services at that time, and the latter recognized the need for specialist podiatry reviews for people with JIA. Despite this, neither provides clinicians with guidance for treatment, and both standards are now over a decade old. A survey of non-specialist podiatrists in the UK found that over 95% were unaware of existing guidelines . There is a clear need for a national evidence-based clinical guideline for foot health that is aimed at clinicians and focused on IA more widely. This new BSR guideline will address the assessment and management of foot health in adults, children and young people with IA and will focus specifically on the healthcare setting in the UK. Clear, culturally sensitive guidance on evidence-based strategies for managing foot problems in people with IA will help clinicians to provide high-quality care for their patients, and will help service providers and commissioners to ensure that adequately resourced foot health services are available to meet patients' needs. The guideline will provide advice to enable a range of clinicians to provide first-line therapy for foot conditions where access to podiatry is limited, and advise when specialist opinions should be sought.

Key facts and figures

IA is an umbrella term encompassing a range of chronic, autoimmune conditions characterised by joint inflammation. These include RA, JIA and spondyloarthropathy (SpA), which encompasses PsA, AS, reactive arthritis, enteropathic arthritis and undifferentiated SpA. Despite the introduction of aggressive pharmacological therapies, foot problems frequently occur in IA.
Around 90% of people with RA experience foot problems during the course of the disease, including rearfoot and forefoot deformity, tibialis posterior dysfunction, peripheral arthritis, and subluxation and dislocation of the MTP joints, leading to pain and reduced walking ability . The foot is also particularly susceptible to damage in JIA. Foot-related impairment and disability have been shown to persist in over 90% of children and young people with the condition, despite intensive pharmacological therapy , and foot problems are also common in adults with JIA. In SpA, foot problems also include peripheral arthritis, dactylitis and enthesitis, particularly at the Achilles tendon, plantar fascia and tibialis posterior tendon insertions but also at many other sites. In PsA specifically, forefoot deformity affects over 90% of people with the condition, while almost two-thirds experience foot pain . Other extra-articular features of IA can manifest in the foot, including peripheral neuropathy and entrapment neuropathies. The risk of peripheral arterial disease is also increased in IA, particularly amongst individuals with a history of steroid use . Long-term steroid use can also contribute to poor tissue viability in the foot, and when combined with joint deformity and poor vascular supply, the risk of tissue breakdown is significantly increased. Foot ulcers are common in IA , and in immunosuppressed patients these carry an increased risk of potentially serious infection. Despite significant advances in the pharmacological management of IA with the advent of whole new classes of medication and a new treatment paradigm, foot problems persist. The impact of foot damage in IA is often underestimated and trivialized, yet people with foot problems consistently report marked reductions in their quality of life, indicating that the impact of foot disorders extends well beyond localized pain and discomfort .

Current practice

A person with IA might present to primary or secondary care with foot problems. In some cases, foot problems precede the diagnosis of IA; 31% of cases are known to have had their first IA symptoms in their feet . After diagnosis, foot problems may be monitored as part of the assessment of overall prognosis and disease activity, potentially with follow-up imaging. Some units run multidisciplinary clinics, but more commonly people with foot problems are seen separately by rheumatology and podiatry, sometimes with additional input from physiotherapy, orthotics, occupational therapy or orthopaedic surgery. The presentation of foot problems in IA is diverse: some people experience problems with general nail and skin care, whereas others have pain and changes in foot shape and posture (deformity), making it difficult to walk and to find suitable footwear. Such problems can pose specific challenges for children and young people, whose musculoskeletal system is still growing and developing. Others may present with neurological or vascular disease affecting the feet, or with foot ulcers. Treatments for foot pain and deformity can include general nail care, callus debridement, foot orthoses, footwear advice and provision, stretching and strengthening exercises, lifestyle advice and surgery. People with persistent active inflammation in the foot may benefit from corticosteroid injections or a change in systemic treatment. Variation in practice is widespread and the inadequacies of foot care in the UK are well documented.
Although National Early Inflammatory Arthritis Audit (NEIAA) data indicated that 76% of rheumatology departments had podiatry access in 2020 , a 2021 BSR rheumatology workforce report highlighted that 80% of departments do not have a podiatrist embedded in their multidisciplinary team . Additionally, a recent clinical audit of adherence to foot health management standards for RA across six National Health Service (NHS) Trusts found that only one podiatry department had the capacity to see people with RA within six weeks of their initial diagnosis , while cross-sectional and longitudinal cohort studies in secondary care frequently report that only 30% to 40% of people with RA access any form of professional foot care or surgery . It has been suggested that better integration of foot health services into rheumatology would be beneficial for people with IA . Referral pathways are often unclear, and the lack of podiatrists within specialist teams means many patients seek foot care from the independent sector or from non-specialist podiatrists who may not have the specific knowledge to manage their problems. Poor compliance with current foot health standards amongst podiatry departments is prevalent .
This guideline is for rheumatologists, general practitioners, orthopaedic surgeons, allied health professionals, and specialist rheumatology nurses involved in the management of people with foot problems in IA, as well as for people with foot problems in IA and their carers. Equality considerations: none known.
3.1 Who is the focus?

Groups that will be covered: Adults with IA affecting the foot; Children and young people with IA affecting the foot.

3.2 Settings

Settings that will be covered: Primary care and community settings; Secondary and tertiary care settings.

3.3 Activities, services or aspects of care

Key areas that will be covered
We will look at evidence in the areas below when developing the guideline, but it may not be possible to make recommendations in all the areas.

Treatment of people
Foot problems (including pain, deformity, nail and skin pathologies, ulceration, reduced circulation and neuropathy) in people with the following rheumatic diseases: RA; SpA, including PsA, AS, enteropathic arthritis, reactive arthritis and undifferentiated SpA; JIA.

Assessment and diagnosis
Assessments; Imaging; Referral to specialist foot services.

Treatment strategy
Personalized care; orthotic devices; footwear; targeted exercises and gait rehabilitation; nail and skin care; wound management; targeted injection therapy; reviewing systemic disease control; surgical referral; follow-up and monitoring.

Secondary prevention
Physical activity; smoking; weight loss.

Areas that will not be covered
Surgical procedures; Treatment of traumatic foot injuries; Systemic drug therapy.

Related guidance
The North West Podiatry Services Clinical Effectiveness Group’s 2010 rheumatology guidelines for the management of foot health for people with rheumatoid arthritis ; NICE guideline [NG100] for rheumatoid arthritis in adults: management in 2018 ; ARMA Standards of Care for People with Musculoskeletal Foot Health Problems in 2008 ; ARMA Standards of Care for Children and Young People with Juvenile Idiopathic Arthritis in 2010 .

3.4 Key issues and draft questions

While writing this scope, we have identified the following key issues and draft questions related to them. The key issues and draft questions will be used to develop more detailed review questions, which guide the systematic review of the literature.

Assessment and diagnosis
Assessments
1. In adults or children and young people with suspected or confirmed IA, what clinical assessments should be undertaken when assessing foot health and disease activity, and how often?
Imaging
2. In adults or children and young people with suspected or confirmed IA, what imaging should be requested when assessing foot health, and when should imaging be requested?
Referral to specialist foot services
3. When should adults or children and young people with suspected or confirmed IA be referred to specialist foot services, e.g. podiatry?

Treatment strategy
Personalised care
4. In adults or children and young people with foot problems in IA, what personalised care (e.g. support for self-management, activation, shared decision making and culturally-sensitive education) relating to foot health, and considering a person’s wider biopsychosocial health determinants, should be provided and when?
Orthotic devices
5. In adults or children and young people with foot problems in IA, are orthotic devices effective, when are they indicated, and which types of orthotic devices are effective?
Footwear
6. In adults or children and young people with foot problems in IA, what types of footwear are effective?
Targeted exercises, gait rehabilitation and electrophysical therapies
7. In adults or children and young people with foot problems in IA, what frequency, intensity, type and time (duration) of exercises, gait rehabilitation and electrophysical therapies is effective?
Nail and skin care
8. In adults or children and young people with common toenail pathologies in IA, what conservative treatments are effective, and when should abnormal nails be surgically removed?
9. In adults or children and young people with common skin pathologies (e.g. callus) in IA, what treatments are effective?
Wound management
10. In adults or children and young people with foot ulceration in IA, including infected foot ulcers, what treatments are effective?
Targeted injection therapy
11. In adults or children and young people with foot problems in IA, are local corticosteroid injections safe and effective, and if so, when should these be offered?
Reviewing systemic disease control
12. When should local foot symptoms prompt a review of systemic disease control in adults or children and young people with IA?
Surgical referral
13. In adults or children and young people with foot problems in IA, when should a surgical referral be considered?
14. In patients requiring foot and ankle surgical procedures, including nail surgery, should biologics/DMARDs be stopped, when should they be stopped, and for how long?
Follow-up and monitoring
15. How often should foot health be reassessed in adults or children and young people with IA?
16. In young people with IA who are transitioning from paediatric to adult care, how should foot health be incorporated?

Secondary prevention
Physical activity
17. In adults or children and young people with foot problems in IA, what is the clinical effectiveness of physical activity?
Smoking
18. In adults or children and young people with foot problems in IA who smoke, what is the clinical effectiveness of giving up smoking?
Weight loss
19. In adults or children and young people with foot problems in IA who are overweight or obese, what is the clinical effectiveness of weight loss?

The guideline is expected to be published in 2023.
Edward Roddy (WG lead) – Rheumatologist
Mike Backhouse (WG deputy lead) – Podiatrist
Lara Chapman (lead author) – Podiatrist
Louise Warburton – GP
Jasmine Davey – Lay member
Alan Rawlings – Lay member
Susan Varley – Lay member
Adele Whitgreave – Lay member
Adam Lomax – Orthopaedic foot/ankle surgeon
Rob Rees – Orthopaedic foot/ankle surgeon
Robbie Rooney – Orthotist
Rachel Ferguson – Paediatric Podiatrist
Gavin Cleary – Paediatric Rheumatologist
Lindsay Bearne – Physiotherapist
Lindsey Cherry – Podiatrist
Helen McKeeman – Podiatrist
Lucy Saunders – Podiatrist
Heidi Siddle – Podiatrist
Jim Woodburn – Podiatrist
Philip Helliwell – Rheumatologist
Sarah Ryan – Rheumatology nurse

Funding: This work was supported by the British Society for Rheumatology.

Disclosure statement: L.W. has received payment from Novartis for an online lecture. No other authors have conflicts of interest to declare.
West Nile Virus Seroprevalence Among Outdoor Workers in Southern Italy: Unveiling Occupational Risks and Public Health Implications

West Nile virus (WNV) is an emerging single-stranded RNA virus belonging to the genus Orthoflavivirus (family Flaviviridae). It is classified within the Japanese encephalitis serocomplex . WNV exists in two main genetic lineages: lineage 1 (WNV-1), which is prevalent across the Americas, North Africa, Europe, and Australia, and lineage 2 (WNV-2), which is endemic to South Africa and Madagascar and has also been present in Europe since 2004 . The primary mode of transmission for WNV is through the bite of infected Culex mosquitoes, with birds acting as natural reservoirs that amplify the virus in the environment. Mammals, including humans and equids, are incidental hosts of WNV . While they can develop clinical signs upon infection, the severity and frequency of symptoms vary. In humans, the majority of infections remain asymptomatic. However, in approximately 20–30% of cases, particularly among elderly or immunocompromised individuals, flu-like symptoms collectively referred to as West Nile fever (WNF) may develop after an incubation period of 3–14 days. Mosquito bites are the primary route of infection, though rare cases of human-to-human transmission have been reported through organ transplantation, blood transfusions, breastfeeding, transplacental transmission, and occupational exposure in laboratory settings . Severe cases can develop complications such as hepatomegaly, splenomegaly, myocarditis, pancreatitis, or hepatitis. Fewer than 1% of infections progress to West Nile neuroinvasive disease (WNND), which can manifest as meningitis, encephalitis, or flaccid paralysis and may result in fatal outcomes . Both biotic factors, such as bird migration and mosquito activity, and abiotic factors, such as local climate conditions, significantly influence the transmission dynamics of WNV . Climate change can affect the biological cycles of vectors and the abundance of animal reservoirs, potentially increasing WNV infection incidence . In recent decades, Europe, including Italy, has experienced an increased frequency, geographic spread, and incidence of WNV outbreaks in humans and equids . Between 2012 and 2020, Italy reported 1145 WNV infections, including 487 cases with neurological symptoms, across 11 regions . Since the start of the 2023 transmission season, Italy has recorded an increasing number of human WNV infections, with 439 cases identified . While northern Italian regions were already considered endemic for WNV, the Apulia region saw a sharp increase in WNV infections between July and October 2023, marking a 10-year high since the first autochthonous case. Consequently, the Apulia region's risk classification changed from low to high. During surveillance from January to October 2023, eight cases of WNV infection were identified in the Apulia region, with an estimated notification rate of 0.2 per 100,000 inhabitants. All were reported between July and October, with six cases presenting as WNND . By early May 2024, 452 confirmed cases of WNV infection had been reported in Italy, including 271 neuro-invasive cases (4 in Apulia) and 46 asymptomatic cases in blood donors . A growing concern in WNV epidemiology is the risk of occupational exposure to mosquito bites, particularly among outdoor workers .
However, comparative seroprevalence studies involving different outdoor occupational groups are limited, making it challenging to quantify the additional risk posed by occupational exposure. Moreover, outdoor workers have not been systematically investigated, despite potentially facing a heightened risk of exposure to mosquito bites. Given these concerns, this study aimed to assess the seroprevalence of WNV and associated occupational risk factors among outdoor workers in Southern Italy.
2.1. Study Design

This cross-sectional study was conducted between November 2023 and April 2024 in Apulia, located in southeastern Italy and characterized by a Mediterranean climate with hot, dry summers and mild, wet winters . The specific municipalities where the workers were recruited are shown in the accompanying figure. A minimum sample size of 85 subjects was determined a priori, based on the WNV prevalence reported in a previous study involving a population recruited in the same area . Calculations assumed a 2% margin of error and a 95% confidence interval to ensure robust sampling and statistical validity for this population.

2.2. Occupation Selection and Workers’ Recruitment

The occupations selected for this study (forestry workers, livestock handlers, agricultural workers, veterinarians, and horse breeders) were chosen because of the workers’ potentially high levels of outdoor activity and frequent encounters with mosquito habitats, both of which may increase WNV exposure risk. Recruitment of forestry workers, farmers, and livestock handlers occurred during educational meetings held across the Apulian region, in the course of which the study objectives and the WNV risks associated with mosquito bites were discussed. Enrolled forestry workers spent significant time in wooded and rural areas, maintaining forest health and clearing undergrowth. Farmers were involved in crop production, including water management practices such as maintaining irrigation channels or collecting rainwater. Livestock handlers had contact with different types of animals; they performed a variety of daily activities centered on the care and management of livestock, but also environmental maintenance such as cleaning shelters, managing waste, and repairing fences or growing shelters. Veterinarians primarily working with livestock and large animals in outdoor or partially open farm settings were invited to participate through voluntary health promotion programs aligned with Italian regulations for worker health protection (D. Lgs. 81/2008). Additionally, a targeted group of military horse breeders was recruited at a specialized equestrian selection centre where a confirmed equine WNV infection had been detected about 6 weeks before the beginning of the study. Participants had to meet the following inclusion criteria: to be at least 18 years old, to have been working in their respective occupation for at least 1 year, and to typically spend at least one-third of their daily working time in outdoor environments. Additionally, they were required to have no known history of immunodeficiency or prior confirmed WNV infection.

2.3. Questionnaire

Each participant completed a structured questionnaire to collect information on: socio-demographic background (age, sex, and education); occupational exposure (job characteristics, working seniority, specific tasks performed, proportion of time spent working outdoors, history of exposure to mosquitoes at work and during leisure activities, and use of repellents and personal protective equipment (PPE)); travel and vaccination history (recent travel to WNV-endemic areas, as well as vaccinations for other arboviruses such as tick-borne encephalitis virus (TBEV), yellow fever virus (YFV), and Japanese encephalitis virus (JEV)); health history and symptoms (any previous flavivirus infections, and any symptoms compatible with WNF or WNND experienced between April and November 2023); geographical location of working areas and workers’ residences;
and other outdoor activities such as camping, gardening, and hunting. The study was conducted following the Declaration of Helsinki and was consistent with ethical public health practice; it was approved by the ethics committee of the University Hospital of Bari (Italy) (approval no. 7770, protocol no. 0053676-08062023).

2.4. Serological Examination

For each participant, a 10 mL blood sample was collected in a Vacutainer tube. Serum samples were obtained after centrifugation at 3000× g for 10 min and stored at −20 °C until analysis. All samples were tested by a commercial ELISA (CHORUS West Nile Virus IgG; DIESSE-Diagnostica Senese S.p.A., Monteriggioni (SI), Italy) to detect IgG antibodies against WNV. Testing was performed according to the manufacturer’s instructions. Sera that scored IgG positive or borderline were further tested for IgM antibodies against WNV (CHORUS West Nile Virus IgM; DIESSE, Monteriggioni, Italy) and for IgG antibodies against Dengue, Zika, Toscana and Chikungunya viruses (CHORUS Dengue VIRUS IgG; CHORUS Zika VIRUS IgG; CHORUS Toscana Virus IgG; CHORUS Chikungunya Virus IgG; all DIESSE, Monteriggioni, Italy) to evaluate possible cross-reactivity between arboviruses. For the microneutralization (MN) and plaque reduction neutralization (PRN) tests, Vero E6 cells (African green monkey kidney cell line; ATCC® CRL-1586™) propagated in Dulbecco’s Modified Eagle Medium (DMEM; Sigma-Aldrich, St. Louis, MO, USA) supplemented with 10% Fetal Bovine Serum (FBS; Sigma-Aldrich, St. Louis, MO, USA) were used. The WNV strain (lineage 2) viral stock, derived from cell-free supernatants of acutely infected Vero E6 cells, was titrated for a 50% tissue culture infectious dose (TCID₅₀) and plaque-forming units (PFU) in Vero E6 cells and stored at −80 °C until use. All serum samples were heat-inactivated at 56 °C for 30 min. In the MN assay, serial twofold dilutions of heat-inactivated serum in DMEM (1:10 to 1:320) were mixed (1:1) with 100 TCID₅₀ of WNV and incubated for 1 h at 37 °C, 5% CO₂. The serum/virus mixture (50 µL) was plated on Vero E6 cell monolayers (10⁴ cells/well) in 96-well plates and incubated for 1 h at 37 °C, 5% CO₂. Then, 50 µL of DMEM was added to each well and the plate was incubated for 4 days, until the appearance of a cytopathic effect in control cultures (cell monolayers exposed to WNV). Serum negative for WNV was used as a control. The antibody titer was defined as the reciprocal of the highest dilution of the test serum sample that showed at least 50% neutralization. The PRN assay was performed by exposing (1:1) serial twofold dilutions of heat-inactivated serum in DMEM (1:10 to 1:320) to 100 PFU of WNV. After incubation for 1 h at 37 °C in a 5% CO₂ atmosphere, 300 µL of the serum/virus mixture was plated on each well of 6-well plates seeded with 2.5 × 10⁵ Vero E6 cells and incubated for 1 h at 37 °C. Then, an overlay medium composed of 0.5% Sea Plaque Agarose (Lonza, Basel, Switzerland) in propagation medium was added to each well. After 4 days of incubation at 37 °C, the monolayers were fixed with methanol (Carlo Erba Chemicals, Milan, Italy) and stained with 0.1% crystal violet (Carlo Erba Chemicals, Milan, Italy), and the viral titers were determined by PFU counting. The percentage of PRN was calculated by dividing the average PFU of serum-treated samples by the average PFU of the viral positive control. All experiments were repeated at least twice.
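To make the endpoint-titer definition concrete, the following is a minimal sketch, using hypothetical plaque counts, of how percent neutralization and the reciprocal titer could be derived from the MN/PRN readings described above; the dilution series and the 50% cut-off follow the assay description, while the counts, the function names, and the percent-neutralization convention (100 × (1 − treated/control)) are illustrative assumptions rather than the laboratory's actual workflow.

```python
# Minimal sketch of the MN/PRN endpoint-titer logic described above.
# Dilutions (1:10 to 1:320, twofold) and the >=50% neutralization cut-off
# follow the assay description; the PFU counts below are hypothetical.

DILUTIONS = [10, 20, 40, 80, 160, 320]  # reciprocal serum dilutions

def percent_neutralization(pfu_treated, pfu_virus_control):
    """Percent plaque reduction relative to the virus-only control wells."""
    return 100.0 * (1.0 - pfu_treated / pfu_virus_control)

def endpoint_titer(pfu_by_dilution, pfu_virus_control, cutoff=50.0):
    """Reciprocal of the highest dilution still giving >= cutoff % neutralization;
    returns None if even the 1:10 dilution fails the cut-off (seronegative)."""
    titer = None
    for d in DILUTIONS:  # ascending dilution = decreasing antibody concentration
        if percent_neutralization(pfu_by_dilution[d], pfu_virus_control) >= cutoff:
            titer = d
        else:
            break  # once below the cut-off, higher dilutions cannot qualify
    return titer

if __name__ == "__main__":
    virus_control = 100.0  # mean PFU in wells exposed to 100 PFU of WNV, no serum
    counts = {10: 5, 20: 12, 40: 30, 80: 55, 160: 80, 320: 95}  # hypothetical
    print(endpoint_titer(counts, virus_control))  # -> 40 (70% neutralization)
```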
All experimental procedures were conducted under biosafety level 3 containment. According to the EU case definition, a positive case was defined as a subject with a WNV-specific IgG antibody response in serum, confirmed by neutralization .

2.5. Statistical Analysis

Statistical analyses were performed using SPSS software (version 14.0, Chicago, IL, USA), with parametric methods or, in the case of a non-normal data distribution, non-parametric methods. A p-value of less than 0.05 was considered statistically significant. The prevalence of WNV infection was calculated as the ratio of samples confirmed positive by PRN to all tested samples, and the 95% confidence interval of the prevalence was calculated using a binomial distribution. Chi-square or Fisher's exact tests were used to evaluate associations between WNV positivity and specific demographic or occupational factors. A stepwise logistic regression analysis was used to evaluate the association between all the risk factors assessed through the questionnaire and WNV positivity.
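As an illustration of the calculations described in Sections 2.1 and 2.5, the sketch below reproduces the a priori sample-size estimate and the exact (Clopper-Pearson) binomial confidence interval for an observed proportion; this is a minimal illustration, not the SPSS procedure actually used. The expected prevalence fed into the sample-size formula is an assumption, since the exact value from the cited regional study is not given in the text.

```python
# Sketch of the sample-size and prevalence-CI calculations described above.
import math
from scipy.stats import beta, norm

def sample_size_for_prevalence(p_expected, margin, conf=0.95):
    """n = z^2 * p * (1 - p) / e^2 (normal approximation for a proportion)."""
    z = norm.ppf(1 - (1 - conf) / 2)  # ~1.96 for 95% confidence
    return math.ceil(z ** 2 * p_expected * (1 - p_expected) / margin ** 2)

def clopper_pearson(k, n, conf=0.95):
    """Exact binomial (Clopper-Pearson) confidence interval for k positives of n."""
    alpha = 1 - conf
    lower = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lower, upper

if __name__ == "__main__":
    # An assumed expected prevalence of ~0.9%, with a 2% margin of error and
    # 95% confidence, gives a minimum sample size in the mid-80s, consistent
    # with the 85 subjects reported in Section 2.1:
    print(sample_size_for_prevalence(0.009, 0.02))   # -> 86
    # Observed seroprevalence with its exact 95% CI (8 confirmed of 250 tested):
    low, high = clopper_pearson(8, 250)
    print(f"prevalence = {8 / 250:.1%}, 95% CI = ({low:.1%}, {high:.1%})")
```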
The demographic and occupational characteristics of the 250 outdoor workers enrolled in the study, the WNV seroprevalence rates across the investigated area, and the univariate analysis of associations between WNV seroprevalence and occupational or demographic factors are presented in the corresponding tables and figures. Among the 250 serum samples tested, eight (3.2%) were positive for WNV by PRN and/or MN assays. Livestock breeders exhibited the highest WNV seroprevalence at 6.5%, whereas agricultural and forestry workers showed lower rates at 1.4% and 2.7%, respectively. Notably, no seropositive cases were identified among horse breeders or veterinarians, and no cases were detected in the 40–59 age group. Other factors, including gender, other outdoor activities, the geographical area of workers' residence, working in wetland areas, specific animal contact, detecting dead birds near worksites, and PPE use, did not show significant associations with WNV seropositivity.

A detailed breakdown of WNV seroprevalence across occupations, including ELISA IgG positivity, borderline IgG, and PRN/MN positivity, is also provided. Livestock breeders exhibited higher rates, with 8.7% ELISA IgG positivity and 6.5% PRN/MN positivity. Overall, across all occupational groups, ELISA IgG positivity was 4%, borderline IgG was 0.8%, and PRN/MN positivity was 3.2%. WNV seroprevalence was assessed using a two-step serological approach: ELISA was first performed to detect WNV-specific IgG antibodies, and samples that tested positive or borderline in ELISA were subsequently analyzed using PRN and MN assays to confirm WNV specificity. Because of the higher specificity of PRN/MN, some ELISA-positive samples were not confirmed as WNV-positive, leading to differences in positivity rates between the two techniques.

The stepwise logistic regression analysis identified age 40–59 years, repellent use, and PPE usage as significant protective factors that reduced the likelihood of WNV infection among outdoor workers. The eight confirmed WNV infection cases, whose demographic and occupational characteristics, travel history, and serological test results are detailed in the corresponding table, included seven males and one female, aged 26 to 74 years, with most individuals reporting contact with animals, primarily cattle. Of the eight subjects positive for WNV IgG antibodies who were further tested for Dengue, Zika, Chikungunya, and Toscana virus IgG, only one tested positive for Dengue virus. This individual, an Indian national, reported no travel in the past year and had been residing in Italy for approximately 2 years. However, this participant was not excluded from the study, as our questionnaire included specific questions regarding vaccination history, and the individual did not report having received a Dengue vaccine.
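For clarity, the two-step case definition described above can be expressed as a small decision rule; the sketch below uses hypothetical field names, and the logic (ELISA screen, reflex of positive or borderline sera to PRN/MN, neutralization required for confirmation) follows the description in this section, which is also why PRN/MN positivity (3.2%) is lower than ELISA IgG positivity (4%).

```python
# Sketch of the two-step serological classification described above.
# Field names are hypothetical; the decision logic (ELISA screen, then
# PRN/MN confirmation of positive or borderline sera) follows the text.

def classify_sample(elisa_igg, neutralization_positive):
    """elisa_igg: 'negative', 'borderline' or 'positive' (screening result).
    neutralization_positive: True/False from PRN and/or MN when the sample
    was reflexed to confirmatory testing; None if never tested."""
    if elisa_igg == "negative":
        return "not a case"  # ELISA-negative sera were not tested further
    if neutralization_positive:
        return "confirmed WNV case"  # IgG plus neutralization (EU definition)
    return "not confirmed (possible flavivirus cross-reactivity)"

samples = [
    {"id": "A", "elisa": "positive", "neut": True},    # counts toward the 3.2%
    {"id": "B", "elisa": "borderline", "neut": False}, # ELISA-reactive only
    {"id": "C", "elisa": "negative", "neut": None},
]
for s in samples:
    print(s["id"], classify_sample(s["elisa"], s["neut"]))
```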
This study examines the seroprevalence of West Nile Virus (WNV) among outdoor workers in various occupations in the Apulia region (Southern Italy), identifying distinct occupational and demographic factors linked to WNV infection. Our findings reveal that cattle breeders exhibited a slightly higher seroprevalence rate compared to other outdoor workers, suggesting that specific job characteristics may elevate the occupational risk of WNV infection. Since the first human case of WNV infection was reported in Italy in 2007, the country has experienced a steady increase in both the geographical spread and the number of cases across different populations, including blood donors. A key study conducted in Tuscany from 2016 to 2019 reported seroprevalence rates between 0.5% and 0.9%, indicating a broader circulation of WNV than previously recognized. This aligns with the findings of Mencattelli et al., whose comprehensive analysis of WNV lineage 2 in Italy, based on national surveillance data, highlights the progressive spread of the virus since its emergence. These studies underscore a worrying trend in Italy's WNV epidemiological landscape. In our study, we found a seroprevalence of 3.2%, which was significantly higher than that observed in the general population of the same area, particularly among cattle breeders with a seroprevalence of 6.5%. This suggests an increased work-related risk of WNV infection in this specific outdoor population. In addition, a recent study indicates a low seroprevalence of WNV (0.32%) in blood donors from the Apulia region, suggesting that WNV circulation in the general population remains relatively limited. This rate is notably lower than the seroprevalence reported in occupationally exposed workers, such as livestock keepers and outdoor laborers in our study. The discrepancy in seroprevalence between these populations may be attributed to differences in exposure risk, as individuals engaged in outdoor professions experience more frequent mosquito bites and prolonged time spent in WNV-endemic environments.

In addition to human seroprevalence, several studies have focused on animal populations, particularly horses, which serve as important sentinels for WNV spread. Research across various Italian regions has shown a notable increase in seroprevalence among equids, reflecting the virus's presence in the environment and its potential risk to both animal and human health. While equids, like other mammals, are dead-end hosts and do not contribute to WNV transmission, their seroprevalence can provide valuable insights into the extent of viral activity in a given area. The integration of ecological and epidemiological data is critical for assessing WNV risk, especially considering changing climatic conditions that may influence vector populations and disease transmission patterns. The seroprevalence of WNV among various occupational groups has become a growing concern due to the increasing incidence of WNV infections and the potential work-related exposure risks. The observed seroprevalence of WNV among outdoor workers can be understood within the broader context of climate change and its impact on vector-borne diseases. In Europe, WNV has become a recurrent public health issue, with outbreaks rising in intensity, frequency, and geographic spread. This trend is largely driven by climate change, which creates more favorable conditions for the virus.

In recent years, climate change has notably impacted the climatic patterns of the Apulia region, resulting in a noticeable increase in average temperatures and shifts in rainfall distribution. This warming trend has not only transformed local agricultural practices but has also created conditions that promote the proliferation of mosquitoes, particularly the Culex species, which are primary vectors of WNV in the region. The higher seroprevalence observed among outdoor workers in our study may be linked to these regional climate changes, which increase their exposure to mosquito bites due to the nature of their outdoor activities. Occupational risk analysis indicates that livestock breeders face a higher risk of WNV exposure compared to other outdoor occupational groups, likely due to the nature of their work, which involves frequent exposure to mosquito-prone environments. Man-made rural environments, such as livestock settings, may create ideal conditions for mosquitoes to thrive and interact with birds, wildlife, livestock, and humans, thereby increasing the risk of WNV outbreaks. WNV circulation has been documented in cattle and sheep, with cases of lethal WNV encephalitis reported in several mammalian species, including ruminants. Unlike birds, mammals cannot replicate WNV to infect mosquitoes, but they may serve as amplifying hosts for mosquito species with opportunistic feeding habits. Some Culex species, which are the most competent WNV vectors, feed on both birds and mammals. This makes them bridge vectors capable of transmitting the virus from avian reservoirs to susceptible mammals, including humans. Given these dynamics, cattle and livestock in general could potentially act as sentinel species for monitoring the spread of WNV. Molecular studies are crucial to confirm these findings and support enhanced prophylactic measures, such as implementing sustained disinfestation procedures in herds. As demonstrated by Odigie et al., contact with animals may represent an occupational risk factor for WNV transmission, particularly in settings where rural and urban environments intersect. Notably, WNV has been isolated from both ixodid and argasid ticks, although their vector competence remains poorly understood and knowledge of this potential transmission route is limited. Of particular interest, soft ticks (the argasid species) can maintain the virus in vivo for over 3 months and have been shown to transmit it to mice, suggesting they may serve as a potential WNV reservoir. This represents a lesser-studied and not fully understood transmission pathway, which may have implications in occupational contexts involving livestock. Additionally, as noted by Bin et al. and supported by a case report by Fonseca et al., contact with WNV-infected birds is another significant occupational risk factor. This risk may be further compounded by the potential for aerosol transmission, a pathway previously demonstrated experimentally. Direct exposure to infected animals, particularly through contact with potentially infectious bodily fluids or aerosols, poses possible risks to workers. While viremia levels in humans and horses are typically too low to sustain mosquito-borne transmission, the viral load in tissues from fatal cases may be sufficient to facilitate transmission through alternative routes. For example, Venter et al. reported that invasive autopsy procedures might increase the risk of mucosal exposure and subsequent infection.
Cattle sheds and similar livestock enclosures may play a significant role in facilitating mosquito breeding and habitation, thereby increasing the exposure risk for livestock keepers. The warm and humid microclimate inside these structures, combined with the constant presence of animals, creates favorable conditions for mosquito survival and reproduction. Studies have shown that Culex pipiens, the primary vector of WNV in Europe, thrives in environments with high moisture levels and organic debris, both of which are commonly found in livestock facilities. Additionally, water accumulation in drinking troughs, irrigation channels, and manure pits provides suitable breeding sites for mosquito larvae, further contributing to vector abundance and persistence. Given that Culex pipiens exhibits nocturnal feeding behavior, individuals who spend prolonged hours in cattle sheds during nighttime activities, such as monitoring parturition or animal health, may face a higher risk of WNV transmission. This occupational exposure may help explain the higher WNV seroprevalence reported among livestock breeders compared with blood donors. Increased nighttime presence in animal shelters, where mosquitoes are most active, could lead to more frequent mosquito bites and sustained exposure to the virus. While our study did not explicitly assess whether livestock keepers regularly stay overnight in cattle sheds, previous reports indicate that such practices are common, particularly during critical periods such as calving or disease monitoring. This prolonged nighttime exposure, coupled with the high mosquito density in livestock environments, may represent a significant but underrecognized risk factor for WNV transmission, warranting further investigation through targeted surveys and enhanced vector control strategies in agricultural settings.

Unexpectedly, and in contrast to cattle breeders, no cases of WNV infection were detected among military horse breeders, although one of the horses they bred had been infected. This finding can be attributed to several factors. Civilian livestock breeders typically work in less regulated environments, unlike military facilities, which adhere to stringent hygiene and veterinary protocols that often include mosquito control measures designed to reduce vector populations and mitigate WNV exposure risks. Additionally, military personnel are more likely to follow regulated health and safety practices, including the use of PPE and repellents, partly owing to the greater risk awareness and perception associated with their socioeconomic background. The combination of these factors may thus have created a comparatively higher-risk environment for WNV exposure among the civilian livestock breeders investigated. Interestingly, the absence of cases among veterinarians may be attributable to the use of PPE or adherence to veterinary biosafety protocols. Their professional training may also contribute to greater awareness of, and adherence to, mosquito bite prevention strategies, including the use of PPE and insect repellents. Future research should further investigate these protective factors and assess whether similar trends are observed in other regions.

The stepwise logistic regression analysis identified three significant variables: age (40–59 years), use of repellents, and use of PPE.
The use of repellents and PPE emerged as a critical protective measure, underscoring both the importance of their consistent application in high-risk environments as a key preventive strategy against WNV and the need for robust workplace safety protocols. Our study also revealed an age-related trend, with individuals aged 40–59 years exhibiting lower infection rates. This finding may reflect behavioral factors, such as greater adherence to protective measures, or variations in occupational tasks among different age groups; further research is needed to explore potential age-related immunological responses or exposure patterns.

The study's cross-sectional design limits its ability to establish causal relationships between exposure variables and WNV seropositivity. The use of serological testing also presents challenges. While ELISA and MN/PRN assays are commonly employed in flavivirus research, they are prone to cross-reactivity with other flaviviruses, such as Usutu virus (USUV), which can lead to false positives. This cross-reactivity may result in an overestimation of WNV seroprevalence, especially among participants previously exposed to other flaviviruses. However, according to the most recent national surveillance reports, no cases of USUV infection were reported in the Apulia region in humans, animals, or vectors during the study period. The absence of evidence for USUV circulation in this region suggests that cross-reactivity with USUV antibodies is unlikely to have significantly affected our results. Furthermore, the confirmation of positive cases through MN and/or PRN assays enhances the specificity of WNV detection. While we acknowledge that completely excluding cross-reactivity is challenging, the available epidemiological data strongly support the reliability of our findings.

Despite these limitations, our study offers valuable insights. The findings highlight livestock breeders as a potentially valuable sentinel population for monitoring the spread of WNV in the region. Preventive and educational programs that emphasize the proper use of PPE and insect repellents could be crucial in reducing WNV transmission risks, particularly among high-risk occupational groups.
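For readers who wish to reproduce this type of multivariable analysis on their own data, the sketch below fits a logistic model and converts the coefficients to odds ratios, where OR < 1 corresponds to a protective factor such as repellent or PPE use. It is a minimal illustration only: the data frame and the column names (seropositive, age_40_59, repellent, ppe) are hypothetical placeholders rather than the study's dataset, and the stepwise selection used in the study would wrap repeated fits like this one in an add/drop loop.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical stand-in data: 250 workers with a binary outcome and binary
# exposure indicators; the values are randomly generated, not the study data.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "seropositive": rng.integers(0, 2, 250),
    "age_40_59":    rng.integers(0, 2, 250),
    "repellent":    rng.integers(0, 2, 250),
    "ppe":          rng.integers(0, 2, 250),
})

# Fit one multivariable logistic regression model.
fit = smf.logit("seropositive ~ age_40_59 + repellent + ppe", data=df).fit(disp=False)

# Report odds ratios with 95% confidence intervals.
table = pd.concat([np.exp(fit.params), np.exp(fit.conf_int())], axis=1)
table.columns = ["OR", "2.5%", "97.5%"]
print(table)
```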
Future longitudinal studies should track seroprevalence and immune response over time in outdoor workers, and investigation into additional risk factors for WNV seropositivity in occupational settings is essential. Identifying alternative occupational transmission routes of WNV is critical for developing effective protective measures for at-risk workers. Exploring the potential expansion of WNV hosts beyond birds is equally important, especially given that infections have been identified in unexpected hosts such as reptiles. Understanding these factors could support the development of more refined, evidence-based strategies tailored to high-risk worker groups and inform policy recommendations for outdoor occupational health in WNV-endemic regions.
The effects of vestibular vertical incisions on the tunnel technique: a randomized clinical trial for the treatment of Recession Type 1 single gingival recessions

Gingival recession is defined as the apical migration/displacement of the gingival margin beyond the cementoenamel junction (CEJ) with associated attachment loss. This condition can be categorized into recession types 1, 2, and 3 (RT1, RT2, and RT3) according to the interproximal clinical attachment level and Cairo's classification. Connective tissue graft (CTG)-based procedures are the preferred treatment choice for RT1 recessions, as they offer the best results regarding recession reduction and complete root coverage (CRC). Tunneling flap procedures, often combined with CTG, have become increasingly popular in recent decades for treating gingival recessions because they preserve the integrity of the papillary tissues without requiring incisions through them. The advantage of tunnel procedures is that the papillae are not severed, which contributes to improved blood supply, wound healing, and esthetic outcomes. According to a meta-analysis of the efficacy of the tunnel technique in treating localized and multiple gingival recessions, the mean root coverage (MRC) and CRC of the tunnel technique for single RT1 gingival recession defects were 84.58 ± 19.11% and 50.8%, respectively. For the treatment of single maxillary gingival recessions, the respective MRC and CRC of a tunnel combined with CTG were 77.4 ± 20.4% and 28.6% at 6 months, and improved significantly to 87.7 ± 18.4% and 50% at 24 months. Creating a continuous gingival tunnel from a sulcular incision is, however, a highly technique-sensitive process, and traditional tunneling flaps without vertical releasing incisions provide limited coronal flap advancement. For the treatment of deep single gingival recessions (more than 3 mm in depth), the tunnel technique therefore faces clinical limitations, as the graft may need to remain uncovered owing to the limited flap mobility. The vestibular incision subperiosteal tunnel access (VISTA) technique has been introduced as a modification of the tunnel technique to address some of these potential limitations. Access through the vestibular incision facilitates tunnel preparation and reduces gingival tension, thereby simplifying the procedure and increasing flap mobility. Moreover, a coronally anchored suture was designed in the VISTA technique to maintain the coronally positioned gingiva during the healing period, with flowable composite resin used to secure the suspended suture at the facial aspect of the tooth. To investigate the effects of an extra vestibular incision on the outcomes of treating multiple RT1 gingival recessions, our previously published study compared the modified tunnel technique with the VISTA technique, both combined with CTG. To control for confounding factors, the control group utilized the modified tunnel technique, which combines the tunnel approach with coronally anchored sutures. The reported MRC and CRC for the VISTA and modified tunnel groups at 12 months were 91.13 ± 16.96% and 70.97%, and 91.40 ± 13.53% and 67.86%, respectively, with no statistically significant differences between the groups. However, the effects of a vestibular incision on root coverage have not yet been compared with those of other tunnel techniques for localized gingival recessions.
The present study aimed to compare the clinical, esthetic, and patient-reported outcomes of VISTA + CTG and modified Tunnel + CTG for treating RT1 single gingival recessions, in order to establish the true added value of the VISTA incision. We hypothesized that the addition of vestibular incision access does not significantly improve the root coverage outcomes for RT1 single gingival recessions, nor does it result in significant differences in esthetic outcomes compared to the modified tunnel technique.

Subjects
The study was a randomized clinical trial with a 12-month follow-up period. The trial was prospectively registered at http://www.chictr.org.cn on 19/12/2015 (registration number: ChiCTR-INR-16007845) and was conducted in accordance with the CONSORT statement. The protocol adhered to the principles outlined in the Declaration of Helsinki and was approved by the Peking University School and Hospital of Stomatology Institutional Human Research Committee on February 13, 2015 (protocol PKUSSIRB-201519007). From January 2016 to December 2018, the study enrolled 26 patients with RT1 single gingival recessions from the Department of Periodontology. Informed consent was obtained from all participating patients. The following inclusion criteria were used: (1) age 18 to 65 years; (2) presence of a single RT1 labial/buccal recession in a non-molar tooth with a recession depth ≥ 2 mm; (3) recession tooth without a non-carious cervical lesion (NCCL) or with a Class A- NCCL (visible CEJ and no step); (4) full-mouth plaque score (FMPS) and full-mouth bleeding score (FMBS) ≤ 20%; (5) no severe tooth malposition or rotation; (6) non-smoker. The following exclusion criteria were used: (1) restorations or caries in the labial/buccal cervical area of the enrolled tooth; (2) history of smoking; (3) current pregnancy or breastfeeding; (4) uncontrolled diabetes mellitus, heart disease, hypertension, etc.; (5) sites with a history of periodontal surgery.

Sample size
The sample size calculation was predicated on a significance level of 0.05 and a statistical power of 80% for a two-sided test. The minimum clinically significant difference (δ) in recession depth between the treatment groups after 12 months was deemed to be 0.50 mm, and the standard deviation (SD, σ) was assumed to be 0.45 mm. Based on these parameters, a sample of 12 patients per group was determined to be necessary. To account for potential dropouts, the target was to enroll 13 patients per group.

Randomization, allocation, and blinding
Participants were randomly assigned to either the VISTA or the modified Tunnel group using computer-generated random numbers (allocation ratio of 1:1). The allocation was concealed in sequentially numbered, sealed envelopes containing the grouping information. One clinician was responsible for registering the treatment assignment (Z.C). The envelopes were opened only after local anesthesia had been administered, just prior to surgery. Examiners and patients were blinded to allocation. An experienced periodontist performed all surgical procedures (J.Z).

Surgical procedures
All enrolled patients received periodontal initial therapy and were instructed to use a soft toothbrush with the roll technique for at least one month prior to surgery. The recipient site for the VISTA + CTG group was prepared using the method previously described by Zadeh and our research group.
Briefly, to gain access to the tunnel, a vestibular incision was created. The location of the incision varied depending on the specific site being treated, and was usually positioned at the midline frenum for the maxillary anterior teeth or adjacent to the treated tooth. The vertical incision, which began 3–5 mm from the papilla tip and extended 8–10 mm in length, was made through the periosteum to elevate a subperiosteal tunnel. The dissection was extended mesially and distally by one to two teeth, and apically beyond the mucogingival junction, to facilitate advancing the gingiva at least 2 mm coronal to the CEJ. Through the single-incision approach, a CTG was procured from the palate. The incision was made 2 mm below the gingival margin, parallel to the palatal midline. A CTG of uniform 1–1.5 mm thickness (8–9 mm in length and 5–6 mm in width) was then obtained by split-thickness dissection. The CTG was placed into the tunnel space through the vestibular access, and its coronal margin was secured at the CEJ position using vertical mattress sutures (5-0 polypropylene, Ethicon LLC, Puerto Rico, USA). The gingiva was then coronally positioned with the CTG completely submerged and secured approximately 2 mm coronal to the CEJ with coronally anchored sutures. The coronally anchored sutures started with an interrupted suture placed approximately 2–3 mm apical to the gingival margin (6-0 polypropylene, Ethicon LLC, Puerto Rico, USA). After the tooth was etched (Gluma etch 35 gel, Kulzer GmbH, Hanau, Germany) for 10 s and thoroughly washed and dried, the suture knot was bonded to the coronal facial surface of the tooth with flowable composite (Z350 XT flowable restorative, 3M, MN, USA). A graphical representation of the procedure can be seen in Fig. A to F.

In the modified Tunnel + CTG group, the procedures were also performed as previously described. Briefly, a partial-thickness tunnel was prepared through a sulcular incision and extended similarly to the VISTA group, to ensure that the flap could be positioned at least 2 mm coronal to the CEJ. If the tooth had extremely thin gingiva, part of the tunnel could be prepared as a full-thickness flap to reduce the risk of gingival perforation during tunnel preparation. Thereafter, similarly to the VISTA group, a CTG of uniform 1–1.5 mm thickness (8–9 mm in length and 5–6 mm in width) was procured from the palate, placed into the tunnel space through the crevicular access, and secured at the CEJ position using vertical mattress sutures. Subsequently, the flap was coronally advanced and secured in place using the same coronally anchored sutures as in the VISTA + CTG technique. The procedure can be seen in Fig. A to F.

Teeth with Class A- NCCLs were treated with root coverage surgery alone in both groups, without restoration, in accordance with the recommended treatment for localized A- NCCLs, which suggests that a root coverage procedure can be performed without restoration.

Post-surgical protocol
Patients were advised to use ice bags intermittently for 2–3 h after surgery to reduce swelling and were instructed to avoid hard chewing, flossing, or brushing at the surgical site before the sutures were removed. They were also directed to rinse their mouths twice a day with 0.12% chlorhexidine acetate solution for at least 2 weeks. For pain management, patients were prescribed ibuprofen (GSK, Tianjin, China) 500 mg every 12 h for 3 days as needed.
Additionally, the patients were provided with amoxicillin (Zhuhai United Laboratories, Zhongshan, China) 500 mg three times a day for 7 days, in line with the literature and university regulations for implantable materials. The sutures were removed two weeks post-surgery. Patients were then scheduled for follow-up appointments at 1, 3, 6, and 12 months post-surgery for clinical evaluations and professional oral hygiene procedures.

Clinical assessment and examiner calibration
Baseline clinical measurements were recorded, and follow-up measurements were taken at 3, 6, and 12 months post-surgery. Two calibrated examiners (K.F and Y.X), who were unaware of the treatment allocation, performed these measurements. A calibration exercise was conducted to determine intra- and inter-examiner reproducibility: recession depth measurements were taken at 10 recession defects in 10 patients (one RT1 single recession each), recruited in addition to the study sample, and were performed twice within a week. Reliability was determined using the intraclass correlation coefficient (ICC). The intra-examiner ICCs of examiners 1 and 2 were 0.962 and 0.914, respectively, and the inter-examiner ICCs were 0.849 and 0.893. Calibration was accepted if the ICC was > 0.8. The parameters were evaluated at the central facial location and rounded to the nearest 0.5 mm with a periodontal probe (UNC 15, Hu-Friedy, Chicago, USA). The parameters were: (1) recession depth (Rec); (2) recession width at the CEJ (RW); (3) probing depth (PD); (4) clinical attachment level (CAL); (5) FMPS and FMBS; (6) width of keratinized tissue (WKT); (7) presence/absence of non-carious cervical lesions; (8) gingival phenotype, evaluated by sulcus probing and categorized as either thin or thick (if the periodontal probe was visible through the gingival tissue, the phenotype was thin; if not, the phenotype was thick); (9) root coverage esthetic score (RES), at 6 and 12 months post-surgery; (10) patient-reported outcomes: patient discomfort and postoperative pain were evaluated with a visual analog scale (VAS) questionnaire (supplementary file) immediately and 2 weeks post-surgery (on a scale of 0 to 10, 0 = no pain at all and 10 = extreme pain), and patient feedback on esthetic satisfaction (VAS) (supplementary file) was gathered at the 6- and 12-month follow-up appointments (on a scale of 0 to 10, 0 = completely unattractive and 10 = completely attractive).

Study outcomes
The main objective was to evaluate the effectiveness of both methods by comparing the reduction in Rec (RecRed) at the 12-month follow-up. The secondary objectives were: (1) to compare the MRC, CRC, and esthetic (RES) results between the two groups; (2) to evaluate the difference in patient-reported results (VAS) between the two groups.

Statistical analysis
The data were analyzed using SPSS 24.0 (IBM, Armonk, USA). A threshold of 0.05 was established for the p-value to determine statistical significance. The statistical analysis of repeated measures across different treatments was conducted using generalized linear mixed models (GLMMs). For the statistical analysis of non-repeated measures, the following methods were applied: (1) for intragroup analysis, the paired t-test was used when the data followed a normal distribution; in cases where the data did not follow a normal distribution, the Wilcoxon signed-rank test was employed.
(2) For intergroup comparisons, the choice of test depended on the nature of the data: if the data were normally distributed and homogeneous, the independent-samples t-test was performed; for normally distributed data that were not homogeneous, a corrected independent-samples t-test (not assuming equal variances) was used; and if the data did not follow a normal distribution, the Mann-Whitney U-test was applied. (3) Fisher's exact test was used to compare frequencies both between and within groups. Generalized linear regression analysis was performed to assess the associations between candidate factors (surgical technique, Rec, RW, PD, CAL, phenotype, and WKT) and MRC, CRC, and RecRed at the 12-month follow-up. (Both the sample size arithmetic and this test-selection rule are sketched below for illustration.)
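Two illustrative asides, not drawn from the paper itself, may help make the Methods concrete. First, the sample size stated earlier (12 per group for α = 0.05, power 80%, δ = 0.50 mm, σ = 0.45 mm) is consistent with the standard normal-approximation formula for comparing two means:

$$ n \;=\; \frac{2\,(z_{1-\alpha/2} + z_{1-\beta})^{2}\,\sigma^{2}}{\delta^{2}} \;=\; \frac{2\,(1.96 + 0.84)^{2}\,(0.45)^{2}}{(0.50)^{2}} \;\approx\; 12.7 $$

i.e. about 12–13 patients per group; the exact figure depends on the software used and on whether a t-distribution correction is applied.

Second, the between-group test-selection rule can be written out in a few lines of Python. The sketch below is our rendering with SciPy; the choice of Shapiro-Wilk for normality and Levene's test for homogeneity of variances, as well as the helper name, are assumptions, since the text does not specify which preliminary tests were run.

```python
from scipy import stats

def compare_groups(a, b, alpha: float = 0.05):
    """Select and run the between-group test following the rule described above."""
    # Normality of each group (Shapiro-Wilk).
    normal = stats.shapiro(a).pvalue > alpha and stats.shapiro(b).pvalue > alpha
    if not normal:
        return "Mann-Whitney U", stats.mannwhitneyu(a, b)
    # Homogeneity of variances (Levene's test).
    homogeneous = stats.levene(a, b).pvalue > alpha
    # equal_var=False runs Welch's ("corrected") independent-samples t-test.
    name = "independent-samples t" if homogeneous else "corrected (Welch) t"
    return name, stats.ttest_ind(a, b, equal_var=homogeneous)
```

The intragroup rule is analogous, with the paired t-test (stats.ttest_rel) and the Wilcoxon signed-rank test (stats.wilcoxon) in place of the two independent-sample tests.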
A total of 26 patients were enrolled in the study, with 24 patients (9 males and 15 females) completing the 12-month follow-up. The mean age of the participants was 32.38 ± 7.8 years. Two patients, one in the VISTA group and one in the modified Tunnel group, were lost to follow-up at 6 or 12 months post-surgery owing to relocation to other cities. Table summarizes the baseline characteristics. In the VISTA group, the mean Rec was 2.67 ± 0.81 mm, while in the modified Tunnel group it was 2.54 ± 0.86 mm. No adverse events were recorded during the follow-up period. FMPS and FMBS were maintained at or below 20% throughout the study.

Clinical results
The clinical results at 6 and 12 months are outlined in Table . At 12 months, both groups achieved a significant reduction in Rec, with a RecRed of 2.38 ± 0.96 mm (95% CI (1.77, 2.98); P = 0.002) for the VISTA group and 2.08 ± 1.10 mm (95% CI (1.38, 2.29); P = 0.003) for the modified Tunnel group. However, no statistically significant difference was found between the groups (mean difference (95% CI): 0 (-0.17, 0.17); P = 1). The CRC at 12 months for the VISTA and modified Tunnel groups was 75% (9/12) and 50% (6/12), respectively, with significantly better complete root coverage in the VISTA group (OR (95% CI): 1.63E+12 (1.63E+12, 1.64E+12); P < 0.001). However, the MRC results showed no statistically significant difference between the VISTA group (90.28 ± 18.06%) and the modified Tunnel group (81.25 ± 29.16%) at 12 months (mean difference (95% CI): 0.83 (-6.88, 8.54); P = 0.834). Furthermore, no significant differences were found between the groups regarding changes in RW, PD, CAL, and WKT, as shown in Table . Four of the 24 enrolled teeth had a Rec of more than 3 mm at baseline, with 3 teeth in the VISTA group (mean Rec: 3.83 ± 0.29 mm) and 1 tooth in the modified Tunnel group (Rec: 5 mm). At 12 months, the RecRed, CRC, and MRC for the VISTA and modified Tunnel groups were as follows: RecRed was 3.83 ± 0.29 mm and 4.50 mm, CRC was 100% and 0%, and MRC was 100% and 90%, respectively. Regarding gingival phenotype, 4 teeth in the VISTA group and 7 teeth in the modified Tunnel group transitioned from a thin phenotype at baseline to a thick phenotype at the 6- and 12-month follow-ups. At 12 months, both groups achieved high RES scores: 8.75 ± 1.14 for the VISTA group and 7.75 ± 2.99 for the modified Tunnel group. This difference was not statistically significant (mean difference (95% CI): 1.00 (-0.98, 2.98); P = 0.786). Upon comparative analysis of each RES component, however, a significant difference was found for soft tissue texture between the groups (VISTA group: 6/12; modified Tunnel group: 11/11; P = 0.014).
The soft tissue texture score, which evaluates scar formation, was notably superior in the modified Tunnel group at 12 months, as shown in Table . The associations between MRC and CRC at 12 months and the candidate factors (surgical technique, Rec, RW, PD, CAL, phenotype, and WKT) were all without statistical significance (P > 0.05). A positive association was observed between baseline Rec (β (95% CI): 0.98 (0.64, 1.32)) and CAL (β (95% CI): 0.69 (0.31, 1.08)) and RecRed at the 12-month follow-up, which was statistically significant (P < 0.001 and P = 0.001, respectively). Furthermore, the baseline gingival phenotype was significantly associated with RecRed (P = 0.013); in particular, a thin phenotype was associated with smaller RecRed values (β (95% CI): -1.00 (-1.77, -0.24)).

Patient-reported results
The patient-reported pain during and after surgery showed no statistically significant difference between the two groups (Table ). At 12 months, the subjective esthetic satisfaction score was also not statistically different between the groups (mean difference (95% CI): -1.18 (-2.48, 0.11), P = 0.070; Table ).

Non-carious cervical lesion and root coverage results
The influence of a non-carious cervical lesion (NCCL) on the root coverage results was also analyzed, as shown in Table . Seven of the 24 enrolled teeth had a Class A- NCCL (visible CEJ and no step). Rec and WKT showed no significant differences between the teeth with NCCLs and those without at baseline (P > 0.05). At 12 months, neither the MRC nor the CRC differed significantly between the teeth with and without NCCLs (MRC: 88.10 ± 15.10% vs. 84.80 ± 27.40%, P = 0.769; CRC: 57.14 ± 53.45% vs. 64.71 ± 49.26%, P = 1). Two of the 12 teeth in the VISTA group had NCCLs, with an MRC of 83.33% and a CRC of 50%; five of the 12 teeth in the modified Tunnel group had NCCLs, with an MRC of 90% and a CRC of 60%.
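For reference, the two root-coverage outcomes reported above are conventionally computed per treated site from the baseline and follow-up recession depths; written out in our notation (consistent with the usual definitions, which the paper does not spell out):

$$ \mathrm{MRC} \;=\; \frac{100\%}{n}\sum_{i=1}^{n}\frac{\mathrm{Rec}_i^{\mathrm{baseline}}-\mathrm{Rec}_i^{\mathrm{follow\text{-}up}}}{\mathrm{Rec}_i^{\mathrm{baseline}}}, \qquad \mathrm{CRC} \;=\; \frac{\#\{\,i:\ \mathrm{Rec}_i^{\mathrm{follow\text{-}up}} = 0\,\}}{n}\times 100\% $$

This also explains why the two measures can rank the groups differently: a few sites with nearly, but not fully, covered roots raise MRC while leaving CRC unchanged.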
In this randomized clinical trial, the effects of a vestibular incision on clinical, esthetic, and patient-reported outcomes in the treatment of localized RT1 gingival recessions were evaluated. Both procedures effectively reduced gingival recession over the 12-month period; however, the VISTA group demonstrated superior outcomes in terms of complete root coverage. This may be explained by the fact that the vestibular incision facilitates tunnel preparation and reduces flap tension during coronal advancement for single gingival recessions. CRC has been reported to correlate positively with the gingival margin position at the end of surgery. Moreover, coronal positioning of the flap requires its relaxation and passive, tension-free adaptation over the CEJ. In the present study, although both groups aimed to position the flap at least 2 mm coronal to the CEJ, residual tension in the tunnel flap might still have been present once the procedure was considered complete, and this residual tension could have affected the CRC results. Interestingly, our previously published randomized clinical trial did not show additional benefits in terms of root coverage when a vestibular incision was used to treat multiple RT1 recessions.
It appears that the vestibular incision is more effective for single recessions than for multiple ones. We hypothesize that, in the case of multiple gingival recessions, the tunneling procedure, which involves at least four papillae, results in reduced flap tension. The extended tunnel facilitates easier mobilization of the flap, making the sulcular incision-based tunneling procedure less challenging for multiple defects than for single ones. This is consistent with a systematic review, which reported lower MRC (82.8%) and CRC (47.2%) outcomes for single recessions than for multiple recessions (MRC: 87.9%; CRC: 57.5%) when treated by the tunnel method. In cases of localized gingival recession, limited flap mobility may leave a significant portion of the graft uncovered, potentially resulting in necrosis, or it may result in insufficient coronal advancement of the flap or increased flap tension, both of which could compromise root coverage outcomes. Consequently, tunnel procedures without vertical incisions are more advantageous for the treatment of multiple recessions than of single ones. Zuhr et al. initially discouraged the use of the classical tunnel for treating deep localized recessions of more than 5 mm in depth, later revising this threshold to 3 mm based on clinical experience. Techniques involving vertical incisions may therefore be necessary for treating deep single recessions, and the VISTA technique, which incorporates a vestibular incision, may be a preferable option for the treatment of deep localized defects.

Both groups achieved satisfactory esthetic outcomes (RES score: VISTA: 8.75 ± 1.14; modified Tunnel: 7.75 ± 2.99). These scores align with the average scores reported in other studies of the tunnel technique combined with CTG, which range from 7.3 to 9.3. In the VISTA group, 50% of the vertical incisions resulted in scar formation, leading to a significantly lower soft tissue texture score compared to the modified Tunnel group. This is an unsurprising disadvantage of vertical incisions. They may also compromise the lateral blood supply to the flap, which is crucial in root coverage procedures, particularly for the successful integration of the connective tissue graft. However, the negative influence of vertical incisions on esthetic and root coverage outcomes remains controversial in the literature; a randomized controlled trial reported that esthetic and root coverage outcomes were comparable with or without vertical incisions in the treatment of gingival recessions. In the present study, the vestibular vertical incision, typically located in a relatively apical area, made the scar generally unnoticeable to patients, even when smiling. Consequently, the scar primarily affects the clinician's evaluation rather than the patient's perception. To minimize the impact of scar formation in the VISTA group, the vestibular vertical incision was made at the frenum or distal to the canine, and a microsurgical approach, including the use of micro-instruments, sutures, and loupes, was used to help reduce scarring. Additionally, certain biologics, such as hyaluronic acid, might improve soft tissue quality and further minimize scarring in root coverage procedures; however, they were not used in this study to avoid introducing additional confounding factors. In the present study, we observed that a thin phenotype was associated with lower RecRed values at the 12-month follow-up, which aligns with the existing literature.
Previous reports have also indicated a significant correlation between increased flap thickness and improved RecRed outcomes at the 60-month post-treatment evaluation in cases involving tunnel procedures combined with CTG. Additionally, a positive association has been documented between weighted flap thickness and both CRC and MRC. This is attributed to the fact that thicker gingiva contains a greater number of patent vessels and facilitates surgical manipulation. Based on these findings, one modification of the tunnel technique is to elevate a full-thickness flap, as maintaining the thickness of the flap at the marginal gingiva may improve clinical outcomes. Class A- NCCLs with a visible CEJ and a shallow defect (< 0.5 mm), when combined with gingival recessions, can often be treated using root coverage procedures alone. However, if the NCCL is deeper and exhibits a "V" shape, a CTG alone may not suffice to fill the gap. Studies have shown that teeth with NCCLs (root surface discrepancy ≥ 1 mm) treated with CTG-based procedures are more likely to experience recurrence of gingival recession after 20 years of follow-up; in cases of recurrence, the gingival margin may "fall" into the NCCL cavity. Therefore, a deep, V-shaped defect may necessitate a combination of root coverage procedures and restorations. This combined approach not only achieves better results in gingival margin contour but also reduces dentin hypersensitivity for NCCLs that affect both root and crown surfaces.

The primary limitation of our study was the small sample size, which necessitates further large-scale, prospective studies to confirm our findings. Another limitation concerns the blinding of the examiners: given that the VISTA procedure involves a vertical incision, there was a notable potential for scar formation, which may have assisted the examiners in identifying the surgical groups. Moreover, the current research did not document or compare the duration of the surgical procedures between the groups, which could potentially have highlighted a benefit of the VISTA method in accelerating tunnel preparation. Additionally, only 7 of the 24 teeth had NCCLs, which might limit the statistical power of the NCCL-related analysis.

Based on the findings of the current study, the VISTA technique combined with CTG is more effective for treating deep single recessions than the modified tunnel technique, particularly for patients unconcerned about scar formation. For NCCLs with a visible CEJ and a shallow defect (< 0.5 mm), CTG-based root coverage procedures without restorations may be a successful treatment option. For the treatment of single RT1 gingival recessions, both techniques were effective; however, the VISTA technique demonstrated superior results in achieving complete root coverage compared to the modified tunnel technique at the 12-month follow-up.
Common practical questions – and answers – at the British Association for Psychopharmacology child and adolescent psychopharmacology course

Since 2001, the British Association for Psychopharmacology (BAP) has run a course ('clinical certificate') on child and adolescent psychopharmacology, which has become more and more popular over the years and has attracted prescribers (around 140/year) from across the United Kingdom and abroad. As Faculty of the recent sessions of the course, we report here the most frequent practical questions that we have been asked by the delegates, together with our answers. In general, the questions relate to aspects/topics for which there is not yet a solid evidence base; therefore, our answers reflect the available evidence alongside our clinical expertise and expert opinion. We have grouped the questions and answers by the disorder to which they refer, in alphabetical order. In writing this article we have adhered to the Neuroscience-based Nomenclature (NbN), a version of which (NbN C&A) is also available for child and adolescent psychopharmacology. The NbN (C&A) prompts researchers and clinicians to label psychotropic medications based on their putative psychopharmacological mechanisms of action, rather than referring to their putative indications (e.g. antidepressants or antipsychotics), which may be misleading. Accordingly, we have indicated the mode of action when we first mention a medication. For ease of reading, we have nevertheless used the former, traditional terminology in the remaining parts of the text. A table summarises all the questions and answers.

Can I use doses of psychostimulants beyond the maximum licensed or recommended ones?
The maximum recommended doses of psychostimulants (which act by inhibiting the reuptake of dopamine and norepinephrine and, for amphetamines, also inducing their release) for the treatment of attention-deficit/hyperactivity disorder (ADHD) in guidelines or formularies may be higher than the maximum doses licensed by regulatory agencies such as the Food and Drug Administration (FDA) or the European Medicines Agency (EMA). More specifically, the maximum licensed dose of methylphenidate for children (except for osmotic-release and prolonged-release formulations, see below) is 60 mg/day, while the British National Formulary (BNF) recommends a dose of up to 90 mg/day, under the direction of a specialist. For osmotic-release (Concerta® XL, Janssen-Cilag, High Wycombe, Buckinghamshire, UK) and prolonged-release (e.g. Xaggitin® XL, Ethypharm, High Wycombe, Buckinghamshire, UK and Delmosart® prolonged-release tablet, Accord UK Ltd, Barnstaple, Devon, UK) formulations of methylphenidate, the maximum licensed dose is 54 mg/day, but the BNF mentions a maximum of 108 mg/day for Concerta® XL, in line with other clinical guidelines, for example those from the Canadian ADHD Resource Alliance (caddra.ca). For lisdexamfetamine, both the maximum licensed and the BNF-recommended dose is 70 mg/day. Many prescribers will use doses above the maximum licensed ones, and some will also use doses beyond the maximum recommended in guidelines/formularies. Currently, there is no solid, meta-analytic evidence on whether, and to what extent, doses beyond the recommended ones are safe and bring additional efficacy/effectiveness.
Some experts agree that, while it should not be standard, routine practice, using doses beyond the maximum recommended ones could be considered when the patient has presented with a partial response, there is only some degree of improvement at the maximum recommended dose, tolerability is good, and the aim is to optimise the response. This could be considered particularly in overweight patients in whom only a partial response was obtained at the licensed dose. If doses beyond those recommended are used, careful monitoring of blood pressure, heart rate, height, and weight should be implemented.

What shall I do if a patient does not respond to psychostimulants?
Following poor response to two psychostimulants (methylphenidate, MPH, and amphetamines, AMPH – including lisdexamfetamine, LDX), some clinicians would consider second- or third-line approved compounds, unlicensed medications for ADHD, or combinations of different agents. However, a number of factors should be assessed before switching to alternative medications or resorting to polypharmacy. Indeed, it should be highlighted that the majority of patients with ADHD respond to one or both classes of psychostimulants when these are used properly. A comparative review of randomised controlled trials (RCTs) found that approximately 41% of children treated with immediate-release stimulants responded equally well to AMPH or MPH, 28% responded better to AMPH, 16% had a better response to MPH, and 15% did not respond to either medication. A more recent review concluded that approximately 91% of individuals with ADHD respond to either or both classes of stimulants. (We note that, as RCTs often exclude participants with specific comorbidities that decrease the rate of response, the response rate in patients seen in daily clinical practice may be lower.) Therefore, before considering alternative agents and/or polypharmacy, the following should be considered: (a) Have I titrated properly? While some patients will respond well to low or moderate doses, others will need higher ones, regardless of their age and weight. Of note, meta-analytic evidence from flexible-dose trials of both MPH and AMPH demonstrated increased efficacy and a reduced likelihood of discontinuation for any reason with increasing stimulant doses; (b) Is this drug/preparation working well at all times of the day, or do I need to change the dose or preparation to obtain more comprehensive coverage? Sometimes, poor response may be detected only at specific times of the day, when the medication effect has worn off; (c) Am I targeting the right symptoms? Psychostimulants are in general highly efficacious/effective on the core symptoms of ADHD (i.e. inattention, hyperactivity, impulsivity), not necessarily on other problems (e.g. oppositional behaviour/emotional dysregulation); (d) Is the patient showing tolerance? Evidence from clinical studies, for example, shows the need to increase the dose of stimulants over time in order to maintain therapeutic response. Additionally, neuroimaging studies (e.g. PET studies) point to an increase in dopamine transporters in adults with ADHD treated for up to 12 months with stimulants. This evidence suggests that tolerance may occur during treatment with psychostimulants, even though more research is needed to gain a better understanding of the exact percentage of patients who develop tolerance, their clinical characteristics, and how to manage tolerance effectively.
Therefore, before considering alternative agents and/or polypharmacy, the following should be considered: (a) Have I titrated properly? While some patients will respond well to low or moderate doses, others will need higher ones, regardless of their age and weight. Of note, meta-analytic evidence from flexible-dose trials of both MPH and AMPH demonstrated increased efficacy and a reduced likelihood of discontinuation for any reason with increasing stimulant doses; (b) Is this drug/preparation working well at all times during the day, or do I need to change the dose or preparation to get more comprehensive coverage? Sometimes, a poor response may be detected only in specific periods of the day, when the medication effect has worn off; (c) Am I targeting the right symptoms? Psychostimulants are in general highly efficacious/effective for the core symptoms of ADHD (i.e. inattention, hyperactivity, impulsivity), but not necessarily for other problems (e.g. oppositional behaviour/emotional dysregulation); (d) Is the patient showing tolerance? Evidence from clinical studies, for example, shows the need to increase the dose of stimulants over time in order to maintain the therapeutic response. Additionally, neuroimaging studies (e.g. PET studies) point to an increase in dopamine reuptake transporters in adults with ADHD treated for up to 12 months with stimulants. This evidence suggests that tolerance may develop during treatment with psychostimulants, even though more research is needed to gain a better understanding of the exact percentage of patients who develop tolerance, their clinical characteristics and how to manage tolerance effectively. Some experts, for example, suggest decreasing the dose or temporarily (for a few weeks) stopping the psychostimulant to overcome tolerance; (e) What else is going on in the patient's life/family life? A comprehensive formulation, beyond diagnosis, is key here; (f) Have I missed any comorbidity? Some comorbidities, for example autism spectrum disorder, are associated with lower chances of response; (g) Is the diagnosis correct? Only after all these aspects have been assessed should the prescriber consider: (1) second-line medications (atomoxetine, which selectively inhibits the norepinephrine transporter; guanfacine, which selectively stimulates alpha-2A adrenergic receptors; or clonidine, which stimulates alpha-2 adrenergic receptors non-selectively); (2) augmenting agents (guanfacine or clonidine XR); (3) other agents, under specialist advice/supervision, for which RCTs provide preliminary evidence of efficacy (e.g. bupropion, a non-competitive antagonist of nicotinic acetylcholine receptors).

Does the concomitant use of cannabis impact on the effectiveness and tolerability of psychostimulants? The evidence on the effectiveness of psychostimulants in marijuana users, and on the tolerability of the psychostimulant–marijuana combination, is limited. In an RCT of OROS-methylphenidate and cognitive behavioural therapy (CBT) in youths with substance use disorder (in the majority of cases (66%), marijuana), there were no significant differences in efficacy between participants assigned to OROS-methylphenidate plus CBT and those assigned to placebo plus CBT on the primary outcome (clinician-rated ADHD-RS scores), even though parent-rated ADHD-RS scores were significantly lower in adolescents treated with OROS-MPH + CBT than in those in the placebo + CBT arm at 8 and 16 weeks. The tolerability of OROS-methylphenidate was not substantially different from what would be expected with OROS-methylphenidate alone. Another RCT showed additive effects of MPH and tetrahydrocannabinol on heart rate, but not on systolic or diastolic blood pressure. Therefore, even though there is no solid evidence, psychostimulants may be overall well tolerated, but their effectiveness may be decreased by the concomitant use of marijuana. This should be discussed with the patient at the initial assessment and during follow-up visits. If a psychostimulant is prescribed, it is crucial to monitor carefully for possible misuse. The chances of misuse can be decreased by using long-acting rather than immediate-release formulations. It should be noted that a subgroup of youths who use marijuana also tend to misuse other illicit substances, which may further impact on the tolerability of psychostimulants. Therefore, prescribers should comprehensively screen for the use of a variety of illicit drugs.
What do I do if a young person has depression and has not responded to two SSRIs and CBT? National Institute for Health and Care Excellence (NICE) guidelines currently recommend CBT as a first-line treatment for moderate-to-severe depression; fluoxetine, a selective serotonin reuptake inhibitor (SSRI), as the first-line antidepressant; and, if that is not effective, sertraline or citalopram. There is limited evidence for other treatments beyond this, other than interpersonal psychotherapy, which is often not available. The first thing to do when a patient is treatment-resistant is to revisit the formulation and diagnosis. Is this really depression, or does the pattern of symptoms since you started seeing the patient suggest a personality disorder as a more appropriate diagnosis? Are there any social factors that have not been addressed? Then review whether the patient has indeed had all the treatments to which they appear to be 'resistant'. Did they stop fluoxetine after a few days of mild side effects, and would they be prepared to try it again? Did they have a good number of sessions of evidence-based psychological therapy from a fully trained and supervised therapist? If they stopped after a few sessions, was this due to a problem with the therapy or the therapist, and would they consider trying a different therapist? If the patient truly does have depression, and you have tried all the evidence-based treatments, there is no evidence to guide you. It would be reasonable to try two SSRIs up to the maximum tolerated dose, within BNF limits: although there is no evidence in favour of or against this, there is some evidence for a dose increase in adult depression, and there are plenty of safety data from paediatric OCD studies, which use high doses of SSRIs. If the patient is nearly 18, it may be reasonable to try antidepressants shown to work in studies of 18–64-year-olds but not in studies of 12–17-year-olds, such as mirtazapine (an antagonist at adrenergic alpha-2 autoreceptors and heteroreceptors, and a blocker of 5-HT2 and 5-HT3 receptors) or venlafaxine (a serotonin and norepinephrine reuptake inhibitor); caution should be used with the latter because of a possible increased risk of suicidal ideation, and for that reason it is especially important to seek peer advice. It may be worth trying an augmenting agent that has evidence in adults, such as lamotrigine, an atypical antipsychotic, lithium or l-thyroxine. In all cases, it is important to discuss and document potential benefits and harms with patients and their families, and the lack of robust evidence or marketing authorisation for any drug treatment other than fluoxetine 20 mg once daily. The practicalities of blood monitoring in local services may mean that it is not feasible to prescribe antipsychotics and (especially) lithium.
It is also important to discuss the marketing authorisation regulations for potential antidepressants in your own country (for example, in the United Kingdom marketing authorisation for adolescent depression exists only for fluoxetine). If you are unsure, it is important to discuss cases with your consultant peers and to document this.

If a patient has severe side effects (such as suicidality) on one SSRI, is it safe to try a second SSRI? While the majority of adolescents given an SSRI have mild, transient side effects, a small number of patients have rarer and more dangerous side effects, including suicidal thoughts, homicidal thoughts and clotting abnormalities. Many patients and their families (and their prescribers) will choose to stop the medication in this situation. There is then the question of whether another SSRI should be used. There is no published evidence regarding this. However, in consensus discussions, several psychiatrists have tried the patient on a second SSRI and this has not led to the same side effect. The patient will likely have an increased risk of, for example, suicidality on a second SSRI compared with a patient who has never had an SSRI, but it could be worth trying, especially in more severe depression, which itself carries a significant suicide risk. As always, it is important to discuss potential benefits and risks with patients and families and to make decisions jointly. It is essential to document these discussions. It is also especially important to follow these patients up regularly when starting a second SSRI and to have a clear plan in place for what is to be done if these side effects recur. If unsure, discuss the case with consultant peers and document this. It is also important to take into account the pharmacological profiles of the first and second SSRI. Among the SSRIs, sertraline and citalopram/escitalopram have significantly higher 5-HT specificity than fluoxetine. So, if sertraline caused dangerous side effects, it would be better to try the pharmacologically more different fluoxetine, rather than citalopram, as the second-line choice.

I see an adolescent with an anxiety disorder whom a GP has started on propranolol. What should I do? SSRIs are by far the most effective medication for anxiety disorders in children, adolescents and adults. General practitioners (GPs) are often (and rightly) cautious about prescribing SSRIs in children and adolescents. This leads some GPs to prescribe propranolol (a beta blocker) instead for anxiety disorder. There are two problems with this: (1) propranolol is not an effective treatment for anxiety disorders; (2) propranolol has significant side effects and is particularly dangerous in overdose. Its use seems to have stemmed from the fact that it can reduce the physiological symptoms of anxiety. However, in most cases, doing this does not reduce the feelings of anxiety. Therefore, it is much better to use SSRIs, which are both effective and safer. It may be better still to offer CBT, which has similar efficacy to SSRIs but is more likely to keep the patient well long term, and has no physical side effects. If we see a patient with an anxiety disorder who is taking propranolol, we should sensitively explain that it is not effective, and offer them an SSRI instead if they would like medication. If the child/adolescent has recovered from the anxiety by the time we see them, it is still worth stopping the propranolol, as they may no longer need it. It is also important to educate the GP that propranolol is neither safe nor effective for anxiety disorders.
One possible exception is children/adolescents with autism spectrum disorder (ASD) and anxiety. Some ASD experts report that children with ASD are much more sensitive to internal somatic stimuli, and if they have pronounced physical symptoms of anxiety, reducing these with propranolol may make a big difference and reduce overall anxiety. However, there is no clear evidence for this, and it is probably more appropriate to try an SSRI first.
Can I use SSRI medication to manage repetitive behaviours in a child with a diagnosis of ASD?
Repetitive behaviours are a core diagnostic component of ASD and, consequently, are commonly observed among individuals with autism of all ages. These behaviours can take different forms: broadly speaking, while for some they comprise 'functional' behaviours, such as the pursuit of interests, for others they may be less purposeful, such as ritualistic and repetitive motor behaviours. The decision to treat is fundamentally determined by the degree to which the behaviour impacts on wellbeing and/or the ability to develop other skills. Treatment of repetitive behaviours will largely be from a psychological/behavioural perspective. In some circumstances, however, such approaches may be ineffective or otherwise difficult to implement; in such cases, medication may be considered. The Cochrane review of 2013, which considered the use of SSRIs in the management of 'core' symptoms of ASD, found no evidence to support their use for ritualistic behaviours. A more recent systematic review similarly concluded that there is no evidence for the use of SSRIs or other medications in the management of ritualistic behaviours. Notwithstanding this, when examining individual studies there is certainly some evidence to suggest that certain SSRIs, guanfacine, risperidone (a D2, 5-HT2 and NE alpha-2 receptor antagonist) and buspirone (a 5-HT1A receptor partial agonist) may be effective. Given these data, it may be reasonable to consider an SSRI when non-pharmacological treatment options have failed, given that SSRIs are used widely in clinical practice and safety data are therefore available, although some ASD clinical trials have highlighted a higher risk of side effects in this population. If SSRIs are ineffective or poorly tolerated, risperidone, guanfacine or buspirone may be considered as alternatives; however, particular caution is advised in relation to their use, which must be under specialist supervision. At the outset it is essential that any trial of medication has (i) a clear titration schedule and (ii) well-defined outcome measures. It is important to bear in mind that medication may take several months to alleviate symptoms. In accordance with the Stopping The Over-Medication of children and young people with a learning disability, autism or both (STOMP) and Supporting Treatment and Appropriate Medication in Paediatrics (STAMP) frameworks ( https://www.england.nhs.uk/wp-content/uploads/2019/02/STOMP-STAMP-principles.pdf ), medication should be discontinued if it does not help and, among responders, should be monitored and discontinued when appropriate.

What are my options in the management of aggression in a child with ASD? Aggression is one of the most frequent reasons for referral to specialist services for a child with ASD, particularly among those children with ASD who have co-morbid intellectual disability. From the outset, a multidisciplinary approach is crucial in any such assessment to identify the relevant underlying factors so that these can be managed. Aggression is not a diagnosis; rather, it is a mechanism for the communication of need, discomfort or distress. The clinical team must identify the relevant biological, psychological and social predisposing, precipitating and perpetuating factors so that these can be managed accordingly. In rare instances, major life events, or factors such as constipation or pain, are key, but more commonly multiple factors are relevant.
Aggression may also be a symptom of an underlying psychiatric diagnosis such as depression or psychosis, which should be managed accordingly. Despite the recommendations made below, it is important to bear in mind that psychotropic medication can itself result in agitation, anxiety and mood disturbance, and so careful attention needs to be paid to the possibility that symptoms may be worsening as a result of medication. In the management of aggression, medication may be deemed an appropriate option in a number of different situations. For example, in the acute situation, where it would be difficult or potentially unsafe to introduce behavioural therapy, medication can offer the possibility of reducing the severity of symptoms and thereby improving engagement in psychological interventions. Either risperidone (1–2 mg) or aripiprazole (5–10 mg; a D2 and 5-HT1A receptor partial agonist) can be used for this purpose, both having shown efficacy and safety among autistic individuals at doses lower than those used as antipsychotics (of note, at these lower doses serotonergic antagonism, rather than dopamine antagonism, predominates). Both have FDA approval for irritability associated with ASD. If used for this purpose, it is important that the medication is discontinued once behavioural work is underway. Less is known about the acute (and maintenance) use of other (so-called) antipsychotics. Similarly, benzodiazepines – positive allosteric modulators of the GABA-A receptor – are sometimes used, but there are no data. In other situations, the consideration of medication may be raised when behavioural therapy has been unsuccessful or only partially successful. It is always important to consider the reasons for this lack of success, including revisiting the possibility of underlying medical causes (pain, constipation, metabolic, allergic and so forth). As in the acute situation, risperidone or aripiprazole may be introduced, with the expectation that the child is likely to remain on it over a period of 6 months or more. It is important that clinical monitoring takes place according to local policy, which in the United Kingdom may vary between NHS Trusts. The FDA recommends monitoring according to known risk factors (e.g. cardiovascular risk factors) and emerging unwanted symptoms such as polydipsia and polyuria. The NICE Clinical Knowledge Summary 'What monitoring is required?' ( https://cks.nice.org.uk/topics/psychosis-schizophrenia/prescribing-information/monitoring/ ), in relation to risperidone use in children, advocates monitoring for hyperprolactinaemia 6 months after starting treatment, and then every 12 months. Collection of blood may not always be possible, in which case the decision to continue medication must be made in discussion with the patient and family and clearly documented. Other (so-called) antipsychotics may also be used second line, but less is known about their use in this population. Similarly, benzodiazepines can have both a calming and an anxiolytic effect, but there are no data on their use in this population. There is no evidence that SSRIs are effective for aggression, and no evidence for the use of mood-stabilising treatments (unless, of course, a specific relevant diagnosis of bipolar disorder (BD) has been made). There is also no evidence for using different medications in combination. Deciding when to discontinue medication is also important, and a decision must be taken early in treatment. It may be reasonable to begin discontinuation at the 6-month point, or sooner for some. However, in reality other factors will influence this decision, such as whether or not the child has structure during the day (less likely during school holidays) and whether additional support is available.
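As a minimal sketch of the monitoring interval just described (a first hyperprolactinaemia check 6 months after starting risperidone, then every 12 months), here is an illustrative date calculation. The `prolactin_review_dates` helper is our own construction for this article, not part of any guideline or software library.

```python
from datetime import date

def prolactin_review_dates(start: date, n_reviews: int = 3) -> list[date]:
    """Illustrative helper: first check 6 months after starting treatment,
    then every 12 months (simplified; assumes the start day of month exists
    in the target month, which always holds for day 1)."""
    def add_months(d: date, months: int) -> date:
        years, month_index = divmod(d.month - 1 + months, 12)
        return d.replace(year=d.year + years, month=month_index + 1)
    return [add_months(start, 6 + 12 * i) for i in range(n_reviews)]

# A child started on risperidone on 1 March 2023 would be due checks on
# 1 September 2023, 1 September 2024 and 1 September 2025.
print(prolactin_review_dates(date(2023, 3, 1)))
```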
What are my pharmacological choices for the management of aggression in a child with ASD when standard options seem to have failed? For some children, behavioural interventions combined with 'first-line' medications such as those suggested above fail to alleviate symptoms. There may be several reasons for this, and it is crucial that these are explored at the outset. The diagnostic formulation may have missed an important aetiological factor, or previously identified triggers or perpetuating factors may have been inadequately managed. For example, an environment with certain characteristics, such as routine, predictability and an appropriate level of stimulation, may not have been made available. Or there may be an absence of adequately trained support staff, and staff may fail to follow behavioural recommendations. In these circumstances it is crucial that medication does not become the perceived 'solution'. It is unethical for the psychiatrist to be expected to medicate where services are unable to meet needs. On the other hand, there will be situations where challenging behaviour continues despite attention to all identified aetiological factors. Such situations are most likely to arise in the context of non-verbal autistic children who are severely or profoundly cognitively impaired. While it remains important to continue to work towards a greater understanding of the behaviour's underpinnings, this is often very challenging, and medication may thus be considered as a least-restrictive, best-interest option in consultation with decision makers. Additionally, the psychiatrist should discuss treatment-resistant cases with their peer group and consider seeking a second opinion. Medications other than those already discussed above have much less evidence for efficacy. Other (so-called) antipsychotics such as quetiapine (a D2 and 5-HT2 receptor antagonist) or olanzapine (also a D2 and 5-HT2 receptor antagonist) are certainly reasonable options, although they have not been studied robustly in this population. Consequently, these medications must be used with caution, including a low initial dose, slow titration and regular monitoring. Alternatively, benzodiazepines such as lorazepam, diazepam or clonazepam may be considered, but the same caution is advised. The combination of an antipsychotic with a benzodiazepine may also be considered, but the aim must always be to achieve symptomatic relief without sedation, such that the child can continue to engage in activities. If a benzodiazepine is prescribed, it must only ever be used in the short term. SSRIs do not have a role to play in the management of aggression, and their use should be limited to the management of mood disorders, anxiety and, in some circumstances, repetitive and ritualistic behaviours as discussed above. Clozapine (a D2, 5-HT2 and NE alpha-2 receptor antagonist) is sometimes used to manage aggressive behaviour in autistic people, but this is based on limited evidence in adults. There are no data available for its use for this purpose in the paediatric population. There is no evidence for other agents, such as mood stabilisers or GABAergic agents (pregabalin).
How does one distinguish ADHD from Bipolar Disorder (BD)?
As a neurodevelopmental disorder, the features of ADHD are usually present early in childhood and often cause functional impairment by the time a child is in primary school, whereas BD usually presents in the teenage years, with peak onset described as between 15 and 19 years. Another factor that helps distinguish ADHD from BD is the course of symptoms: in ADHD the symptoms are pervasive and generally chronic over time (with some variation in severity), whereas BD presents with an episodic course of symptoms (in children and adolescents these episodes may be of shorter duration than those seen in adults), with periods of euthymia interspersed. Lastly, ADHD is a disorder of attention, motor activity and impulsivity, whereas BD is a disorder of mood. Of course, these disorders may also occur co-morbidly, and if this is the case appropriate management strategies should be employed.

How does one distinguish Borderline Personality Disorder (BPD) from BD? BD is a disorder of mood characterised by discrete mood episodes. The periods between mood episodes, also described as euthymia, are usually characterised by stable psychosocial functioning. Borderline personality disorder (BPD), also known as emotionally unstable personality disorder (EUPD), is a personality disorder characterised by a long-term pattern of unstable interpersonal relationships, a distorted sense of self and marked affective instability. The affective instability is often triggered by environmental factors and usually lasts hours to days, unlike the mood episodes in BD, which last weeks to months. Other clinical variables that help distinguish BD from BPD include the response to pharmacotherapy in BD, a family history of BD, and the fact that BPD is often associated with significant early-life trauma. When developing an individualised treatment approach, adopting a dimensional perspective in addition to a categorical one is beneficial, particularly when these conditions are co-morbid.

How does one distinguish treatment-emergent affective switch (TEAS) from behavioural activation when using antidepressants, and what is the management? Both phenomena can occur quite soon after the initiation of antidepressants. The International Society for Bipolar Disorders Task Force recommends using the term 'treatment-emergent affective switch' (TEAS) instead of 'antidepressant-induced switch' to emphasise the association of (hypo-)manic symptoms with antidepressant treatment without implying causality. Researchers operationalise the time frame in which TEAS can emerge differently, some using 8 weeks while others define it as 12 weeks. However, the task force also states that if symptoms occur within 2 weeks of initiating antidepressants, they should be termed antidepressant-associated TEAS. In the activation syndrome, symptoms can include impulsivity, insomnia, restlessness, hyperactivity and irritability, whereas an antidepressant-induced manic switch presents with the symptoms of a (hypo-)manic episode. Mild hypomanic symptoms are much more common than manic switching, and it may not always be easy to differentiate a hypomanic switch from behavioural activation. Indeed, some scholars suggest that behavioural activation might represent subsyndromal manic symptoms or unrecognised BD, especially in young persons who have not been diagnosed.
However, other researchers highlight that manic spectrum symptoms involve a change in mood and behaviour with coexisting grandiosity and euphoria, which are not found in the SSRI-induced activation syndrome. The treatment for a manic switch includes immediate discontinuation of the antidepressant (and possibly the use of antimanic agents), whereas the activation syndrome benefits from a reduction or discontinuation of the antidepressant.
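The timing conventions above reduce to a simple latency rule. The sketch below is a hedged illustration: the cut-offs are the ones quoted in the text, while the function name and label strings are our own.

```python
def label_mood_switch(weeks_since_antidepressant_start: float) -> str:
    """Label emergent (hypo-)manic symptoms by latency from antidepressant
    initiation, following the task-force terminology summarised above."""
    w = weeks_since_antidepressant_start
    if w <= 2:
        return "antidepressant-associated TEAS (task-force 2-week rule)"
    if w <= 8:
        return "TEAS (within the 8-week window used by some researchers)"
    if w <= 12:
        return "TEAS (only under the broader 12-week definition)"
    return "outside the commonly used TEAS windows"

print(label_mood_switch(1.5))  # antidepressant-associated TEAS
print(label_mood_switch(10))   # TEAS only under the 12-week definition
```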
What additional considerations are needed when prescribing in young people with eating disorders? There are considerations that relate to the physical complications of eating disorders. These include cardiac risks secondary to malnutrition, and electrolyte abnormalities secondary to malnutrition or purging. Low weight can cause bradycardia and prolong the QTc interval, thus increasing the risk of arrhythmia, as can some psychotropic medications, in particular haloperidol, a D2 receptor antagonist. Studies suggest olanzapine has little effect on the QTc interval, but the UK manufacturer advises caution on the basis that other antipsychotics have QT-prolonging effects. Purging causes potassium depletion, and low potassium can also arise secondary to severe weight loss. Hypokalaemia can further prolong the QTc interval as well as causing arrhythmia in its own right. It is therefore good practice to perform an ECG before prescribing.
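As an aside on the QTc values mentioned above: the measured QT interval is conventionally corrected for heart rate before interpretation, most commonly with Bazett's formula, QTc = QT/√RR. The sketch below shows only the arithmetic; it is illustrative and not a substitute for machine-reported or cardiology-reviewed values.

```python
import math

def qtc_bazett(qt_ms: float, rr_s: float) -> float:
    """Bazett's correction: QTc = QT / sqrt(RR), with QT in ms and the
    RR interval (60 / heart rate in bpm) in seconds."""
    return qt_ms / math.sqrt(rr_s)

# The same measured QT of 400 ms corrects differently at different rates:
# at 60 bpm (RR = 1.0 s) QTc = 400 ms; at 50 bpm (RR = 1.2 s) QTc ~= 365 ms.
print(round(qtc_bazett(400, 1.0)), round(qtc_bazett(400, 1.2)))
```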
Another consideration is patient engagement and adherence to prescribed medication. Anxiety about, and sensitivity to, side effects need to be addressed directly before initiating medication, in particular fear of weight gain secondary to antipsychotics. Patients should be reassured that the small doses of antipsychotics such as olanzapine typically used will have a minimal impact on appetite, that the use of antipsychotics is short term (months rather than years) and that weight can be managed by adherence to meal plans during therapy. The primary purpose of prescribing antipsychotic medication in patients with anorexia nervosa is to reduce overwhelming arousal during meal times, not directly to cause weight gain.

Should comorbidities in young people with eating disorders be treated separately? Comorbidities in eating disorders are common – up to 55% of children and adolescents presenting with an eating disorder have a comorbid disorder at presentation. The commonest are depression, anxiety (especially social anxiety), obsessive-compulsive disorder (OCD) and neurodevelopmental disorders such as ASD and ADHD. Three important considerations are warranted when deciding whether to treat the comorbidity. First, eating disorders can mimic or accentuate comorbidities, which are corrected as symptoms resolve. An example is the quasi-autistic effects that occur in the starvation syndrome, such as difficulties in set shifting and enhanced central coherence. Second, effective psychological treatments for eating disorders can result in improvements in other areas. For example, family-based treatment for anorexia nervosa, once malnutrition has been corrected, involves supporting dialogue around adolescent challenges and anxieties, such as difficulties in school or in interpersonal relationships, in a supportive family context. Clinical trials show resolution of anxiety and depression, as well as the eating disorder, at the end of treatment for the majority of subjects. Finally, studies suggest that some pharmacological interventions are ineffective during the acute phase. For example, antidepressants show limited efficacy in the context of malnutrition, and it has been suggested that this decrease/loss of antidepressant efficacy is due to starvation-related structural and biochemical/pharmacological changes in the brain. However, in some patients the history clearly suggests that an untreated or undiagnosed mental disorder preceded the onset of the eating disorder, and even that the eating disorder functioned as a 'solution' to these feelings of anxiety or other negative affect. For example, OCD may re-emerge as anorexia nervosa recedes. If a patient is not responding to competently delivered treatment as expected, re-evaluation of the diagnosis and formulation, and additional treatment strategies for the comorbidity, should be considered.

How might treatment approaches differ in young people with an eating disorder in the context of neurodevelopmental disorder? Neurodevelopmental disorders are common across mental disorders, and are especially associated with eating disorders. Studies report that between 4% and 52.5% of participants with anorexia nervosa meet suggested clinical cut-offs for ASD, with higher proportions in adult than in younger populations. This wide range reflects heterogeneity in the diagnostic assessment across studies. ASD is also commonly comorbid with avoidant restrictive food intake disorder. Treatment adaptations for comorbid ASD are in development (e.g. the PEACE pathway). The possibility of comorbid ASD is an important consideration when prescribing, in terms of anticipated response, treatment duration and side-effect sensitivity. In general, people with ASD are more likely to remain on medication and to be prescribed polypharmacy. What research there is suggests autistic people may be more likely to experience side effects such as drowsiness, irritability and reduced activity. ADHD is commonly comorbid in the context of bulimic and binge eating disorder (BED). Identification, assessment and treatment of comorbid ADHD may be helpful in addressing the difficulties in impulsivity and self-regulation common to both disorders. The potential appetite-suppressing effects of ADHD treatment should also be a consideration in evaluating the treatment options. Trials of lisdexamfetamine show favourable results compared with placebo on remission, change in body mass index (BMI) and binge eating in adults with BED. However, in trials more people withdrew due to adverse events in the active arm, and there was a trend towards higher depression scores in the lisdexamfetamine arm compared with placebo. Lisdexamfetamine is not licensed in the United Kingdom for the treatment of BED, but is licensed for the treatment of ADHD. No pharmacological trials in young people with BED have been conducted as yet.
What is the role of cannabidiol in the child with epilepsy and behavioural problems?

There are now sufficient high-quality studies to confirm the efficacy of cannabidiol in treating seizures in both adults and children. The mode of action of cannabidiol as an antiseizure medication remains uncertain. Its efficacy and adverse effects are not very different from those of other relatively new antiseizure medications; overall, it is neither notably more nor less effective. With regard to either beneficial or adverse behavioural effects, there is currently conflicting evidence, although with some suggestion of benefit. A confounding factor is that seizure frequency can itself affect behaviour: if cannabidiol affects seizure frequency, it may be difficult to determine whether any change in behaviour results from the change in seizures or is a direct effect of the cannabidiol itself. There are also some important drug interactions; for example, cannabidiol raises clobazam blood levels. Further evidence will be required before definitive information on the effects of cannabidiol on behaviour in young people with epilepsy can be provided.

If a child is having nocturnal episodes that are disturbing sleep, when should we be assessing for epilepsy as a possible cause?

In the absence of a history of daytime seizures, the likelihood of undiagnosed nocturnal seizures is lower. However, certain types of epilepsy present with seizures that occur only or primarily at night. In particular, nocturnal seizures of frontal lobe origin can present in a way that is inconsistent with what most people would consider typical of epileptic seizures. Nocturnal frontal lobe seizures typically present with brief episodes that can occur multiple times in a single night, often with non-rhythmical movements. They are frequently accompanied by vocalisation and, in contrast to the situation for most other focal-onset seizures, it is often possible for the individual to recall the seizures afterwards.
The recollection can be disturbing, sometimes being associated with a subjective difficulty in breathing, which could be misdiagnosed as asthma. A careful history will usually distinguish nocturnal frontal lobe seizures from other nocturnal episodes; for example, they can be distinguished clearly from night terrors through careful history taking. A video of the nocturnal seizures, perhaps using an infrared camera, can assist in the diagnostic process. Nocturnal frontal lobe seizures classically respond to treatment with carbamazepine; they can also respond to other antiseizure medications, such as levetiracetam. Children with nocturnal tonic (stiffening) seizures generally have a background history of epilepsy. Because they are brief, nocturnal tonic seizures may be difficult to distinguish from normal nocturnal movements, but video EEG monitoring can be diagnostic. Brief tonic seizures can occur many times in a single night, sometimes more than 100 times. Such frequent seizures almost certainly affect daytime alertness and performance. The other nocturnal epilepsy-related phenomenon of major importance, although rare, is electrical status epilepticus of slow-wave sleep (ESES), otherwise known as continuous spike-wave of slow-wave sleep. The definition of ESES requires that more than 85% of slow-wave sleep be replaced by spike-wave epileptiform discharges. Although most cases present against a background of established epilepsy, some present with no history of overt seizures. ESES can be accompanied by profound loss of skills, especially of verbal auditory comprehension, which in some cases can lead to complete loss of speech. ESES accompanied by loss of speech is known as the Landau–Kleffner syndrome, or acquired epileptic aphasia. Several medication treatments have been advocated, with some degree of success. If antiseizure medication is not effective, then the surgical procedure of multiple subpial transection should be considered. Although reports of the efficacy of this procedure are variable, it can be highly successful in allowing the aphasic child to speak again. Any child who loses skills for no apparent reason deserves thorough investigation, including consideration of EEG monitoring during sleep. If this reveals ESES, then prompt treatment should follow. Children with acquired aphasia, understandably, may have major behavioural difficulties, including autistic features, emphasising the importance of effective, prompt and appropriate management.

If I have prescribed fluoxetine to a teenager with depression but no history of epilepsy and he or she has a seizure, what should I do?

Both animal work and the landmark paper by Alper et al., analysing data on people with depression, strongly suggest that SSRIs are protective against seizures rather than precipitating them. A cursory examination of the data could lead to misleading, incorrect conclusions. Depression itself is a risk factor for having seizures. People taking SSRIs for depression are more at risk of seizures than the general population, which might lead to the incorrect conclusion that the SSRIs have increased the risk. This is because the comparison is the wrong one. If a group of people with depression taking SSRIs is matched with a group of people with depression not taking SSRIs, then those taking SSRIs have a very much lower risk of having seizures, as confirmed by the Alper et al. review.
In brief, if someone taking SSRIs for depression has a seizure, the situation should be discussed fully with the young person and family. It is unlikely that the SSRI precipitated the seizure, and it is even possible that the SSRI has some protective benefit against further seizures. The family should be provided with accurate information on which to base a decision. Usually, the recommendation will be to continue the SSRI unless there is a clear indication for not doing so. If it is a single seizure with no associated clinical signs or symptoms that might cause concern, such as a new abnormality on neurological examination, it is usually not necessary to refer to a paediatrician or neurologist. If further seizures occur, full assessment for epilepsy is recommended.
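The 'wrong comparison' argument above can be made concrete with toy numbers. The rates below are invented purely to show the structure of the confound (depression raises seizure risk; SSRIs lower it within the depressed group) and are not real epidemiological estimates:

```python
# Hypothetical annual seizure rates per 1,000 people (illustrative only)
rate_general_population = 1.0
rate_depressed_no_ssri = 4.0   # depression itself raises seizure risk
rate_depressed_on_ssri = 2.0   # lower than untreated depression

# Wrong comparison: SSRI users versus the general population
print(rate_depressed_on_ssri / rate_general_population)  # 2.0 -> SSRIs look harmful

# Right comparison: matched depressed groups with versus without SSRIs
print(rate_depressed_on_ssri / rate_depressed_no_ssri)   # 0.5 -> SSRIs look protective
```

With these made-up figures, the naive comparison doubles the apparent risk while the matched comparison halves it, which is exactly the reversal the text describes.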
What steps do you take in medicating OCD in children and young people?
Much of the information in NICE guidance for the initiation of treatment of OCD is still based on strong evidence. When choosing to initiate medication as part of stepped care, it is important to have a clear narrative for your patient and their family about the typical need to increase to a high dose of SSRI. Meta-analytic evidence from OCD treatment trials largely emanates from adult research but still emphasises the need to use a moderate-to-high dose of medication. In clinical experience, most adolescents with OCD are able to tolerate weekly increments of medication towards a high dose, for example fluoxetine 60 mg daily. Understandably, young people with OCD are often very keen to see immediate effects of medication on their OCD symptoms. This is not realistic, and a treatment trial for OCD should last around 3 months. It can be important to discuss the fact that the SSRI may bring additional secondary benefits, such as improvements in mood and generalised anxiety, which are otherwise very common comorbidities. This can improve engagement and help longer-term commitment to the OCD treatment. These gains for the young person's mental state are also individually important. Ideally, response to treatment should be tracked with a standardised measure. If there is little shift in the core OCD symptoms after 3 months at a high dose of an SSRI, cross-titrating to a second SSRI is recommended, again as a full 3-month trial at high dosing. It is after a second high-dose trial that one would consider a trial of clomipramine or, more commonly, augmentation with low-dose antipsychotic medication.

What is the approach to the use of atypical antipsychotics in OCD?

Patients who fail to show a good treatment response to SSRI medication may respond to the addition of a low-dose antipsychotic to the SSRI therapy. It is important to note that the evidence for this intervention is purely as an augmenting agent: OCD will not respond to treatment with atypical antipsychotics alone. The supporting research emanates from studies in adult-age patients and, of the obsessive-compulsive disorders, only OCD and not body dysmorphic disorder has been investigated. In considering this intervention in children and young people, it is important that all the typical provisions and considerations for prescribing and monitoring antipsychotics are in place. It is also important to have a shared understanding with the young person and their family about the prospects for improvement. The data support the use of low-dose antipsychotic augmentation of an SSRI only. Approximately one-third of treatment-resistant cases of OCD will show an additional response to adding a low-dose antipsychotic to the SSRI. It is important when starting this intervention to adopt an open and transparent approach: two-thirds of patients will not show an improvement. It is therefore important to set a time limit of around 2–3 months and then discontinue the intervention if there is no response. Ideally, the intervention should be monitored with an appropriate OCD measure, such as the Children's Obsessive Compulsive Inventory or the Children's Yale–Brown Obsessive Compulsive Scale. Clinicians can feel tempted to increase the dose of the atypical antipsychotic; however, if a patient is going to be among the minority who show this effect, it is likely to occur with low-dose atypical antipsychotic augmentation of the SSRI.
It is important to be clear in declaring the trial of treatment ineffective and discontinuing it. When treatments for OCD have been suboptimal, children, young people and their parents often ask for additional switches of medication. This can be an important moment in clinical practice to discuss efforts around psychological therapy. Where medications have not brought about the required improvements, young people and their families need to understand that multiple further switches of psychopharmacology are unlikely to be helpful. It can be in these moments that one can renegotiate and redouble efforts around CBT for OCD.
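The stepped sequence described over the last two answers can be summarised as simple branching logic. The sketch below only restates the durations and steps given in the text (two full 3-month high-dose SSRI trials, then clomipramine or time-limited low-dose antipsychotic augmentation); the function and its data structure are our own framing, not a validated clinical tool:

```python
from dataclasses import dataclass

@dataclass
class Trial:
    treatment: str   # e.g. "SSRI-1", "SSRI-2", "augmentation"
    months: int
    responded: bool

def next_ocd_step(history: list[Trial]) -> str:
    """Schematic stepped-care sequence for medicating OCD, per the text."""
    if any(t.responded for t in history):
        return "continue current treatment; track with a standardised OCD measure"
    full_ssri_trials = [t for t in history
                        if t.treatment.startswith("SSRI") and t.months >= 3]
    if len(full_ssri_trials) == 0:
        return "first SSRI, titrated weekly to high dose, full 3-month trial"
    if len(full_ssri_trials) == 1:
        return "cross-titrate to a second SSRI, high dose, full 3-month trial"
    if not any(t.treatment == "augmentation" for t in history):
        return "clomipramine, or low-dose antipsychotic augmentation (2-3 month limit)"
    return "declare the trial ineffective, discontinue, and redouble CBT efforts"

print(next_ocd_step([Trial("SSRI-1", 3, False)]))
# -> cross-titrate to a second SSRI, high dose, full 3-month trial
```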
What drug treatment should I use if a patient has Borderline Personality Disorder (BPD)?

Personality disorders can and do occur in people under 18, and no diagnostic criteria say they cannot. Patients can find an accurate diagnostic label helpful; the correct diagnosis helps clinicians to give appropriate treatments and leads to a more realistic prognosis being given. Making such a diagnosis is not straightforward, given the complexity of adolescence, and is often better done over multiple assessments; it is also easier if the adolescent is older. The majority of evidence on personality disorders is for emotionally unstable/borderline PD. There is no evidence that any medication is effective for the core symptoms of BPD in adults or adolescents. There is limited evidence that medication is effective for comorbid mental illness in adults with BPD, and no evidence for this in adolescents. Instead, the mainstay of treatment of BPD (in all age groups) is complex, integrated psychological therapy programmes. In particular, dialectical behaviour therapy and mentalisation-based treatment have been found to be effective in adolescents with BPD, although studies were limited by including mainly girls and only the subset of patients with BPD who were happy to be in a treatment trial. It is also essential to address social factors (e.g., current abuse, educational issues, stability of placement) in helping these patients.
Can I diagnose and treat schizophrenia in adolescents who are consuming illicit drugs?

The prevalence of schizophrenia rises during adolescence, which is also the time when drug consumption rises. Cannabis, particularly high-potency cannabis, is a common precipitant of or contributor to psychosis in young people. Youth experiencing psychosis also frequently misuse substances, which makes the differential diagnosis between substance-induced psychosis and primary psychotic disorders challenging. NICE guidelines advise inquiry about the use of alcohol and illicit drugs when assessing a young person with psychosis and, if present, about particular substances and patterns of use. Performing a urine drug test may also be useful in acute psychosis. In drug-induced psychosis, psychotic symptoms present in the context of acute intoxication or withdrawal from substances, with gradual recovery and remission within 1 month of sustained abstinence, which may not always be possible to achieve. Regarding clinical features that differentiate drug-induced from primary psychotic disorders, later age of onset, fewer negative symptoms, greater insight and a less frequent family history of psychosis have been reported more often in drug-induced psychosis, although the differential diagnosis cannot be made based solely on these. If diagnostic criteria for schizophrenia are met in an adolescent with a substance-use disorder, both diagnoses should be made. Integrated care, in which treatment for both disorders is provided simultaneously by the same clinician in community-based mental health teams, should be offered. There is no evidence-based guidance on the choice of antipsychotic in adolescents with schizophrenia and a co-occurring substance-use disorder. However, this comorbidity complicates management and prognosis, increasing relapse rates and worsening medication adherence. Long-acting injectable (so-called) antipsychotics have shown some efficacy in adults with these conditions and may be an option in older adolescents. Antipsychotic treatment may also be considered if symptoms of psychosis persist in adolescents who are not able to maintain abstinence.

How do I treat an adolescent with treatment-resistant schizophrenia?

Treatment-resistant schizophrenia is defined by the persistence of significant symptoms despite adequate treatment with at least two different antipsychotics, each lasting 6 weeks, during which there was no appreciable symptomatic or functional improvement as measured by validated rating instruments. It can be present from the beginning of therapy or develop over time, often after relapses. Before concluding that a patient has treatment-resistant schizophrenia, we need to investigate possible causes of lack of effectiveness of the prescribed treatments.
After reviewing the diagnosis, we should document adequate treatment adherence and make sure that optimal medication dosages have been used. In children and adolescents, antipsychotics should not be dosed according to weight, but rather using a 'start slow, go slow' approach, raising the dose until reaching recommended maximum doses or until adverse events appear. If a trial cannot be completed at an adequate dose, another treatment should be tried until two adequate trials of different antipsychotics are completed. Adequate treatment of medical and psychiatric comorbidities should also be implemented (medication or psychosocial interventions as needed), paying attention to potential drug interactions that may increase adverse effects or diminish the efficacy of antipsychotic medication. According to NICE guidelines, clozapine should be offered to children and young people with treatment-resistant schizophrenia. This recommendation is based on evidence of clozapine being significantly more effective than other antipsychotics in children and adolescents. Monitoring clozapine trough plasma levels may help to guide dosing. Greatest efficacy is seen at levels ⩾350 µg/L, although this level is not universal, and factors such as gender, inflammation, or tobacco or caffeine consumption may alter clozapine plasma levels. Initiation of clozapine requires a careful balance of the risk/benefit ratio and commitment to the increased monitoring requirements by patient, family and psychiatrist. Psychiatrists should adopt a proactive approach to assessment, intervention and reassurance for patients and families, as most side effects will appear during the first weeks of treatment and can be managed without discontinuation.

How do I manage weight increase related to the use of antipsychotic medications in children and adolescents?

Antipsychotics are associated with weight gain and adverse metabolic effects, particularly in children and adolescents, although the mechanisms driving these effects are still largely unknown. However, not all antipsychotics carry the same risk. In randomised clinical trials, clozapine, olanzapine and quetiapine showed the highest weight gain, while molindone, lurasidone and ziprasidone (which was not more efficacious than placebo) induced less weight gain. In observational studies, olanzapine and clozapine displayed the highest risk of weight gain, followed by risperidone, quetiapine and aripiprazole, and ziprasidone was associated with no weight gain. Longer time in treatment and being drug-naïve also increased risk, with a ceiling effect determined by higher baseline BMI values. Therefore, as there are no differences in efficacy (except for clozapine and ziprasidone), choosing an antipsychotic with less potential for weight increase, particularly if this is the first antipsychotic trial, together with education about appropriate nutrition and physical activity and regular weight monitoring, may help to limit weight gain. If weight gain becomes established, treatment options include switching to a less orexigenic/metabolically adverse antipsychotic, adjunctive behavioural treatments and adjunctive pharmacological interventions. The IMPACT trial enrolled obese/overweight youth aged 8–19 years with psychotic disorders and weight gain after treatment with antipsychotics, and randomised them to adjunctive metformin, antipsychotic switch or a control intervention, with all arms receiving healthy lifestyle education.
Both active interventions differed significantly from the control condition, with no differences between them, but with more gastrointestinal problems in the metformin group. However, weight reduction with aripiprazole and metformin was modest, highlighting the importance of considering potential risks and benefits prior to antipsychotic initiation. Switching to newly FDA-approved agents with better weight and metabolic profiles, such as lurasidone (a full antagonist at dopamine D2, 5-HT2A and 5-HT7 receptors) or brexpiprazole (a partial agonist at D2, D3 and 5-HT1A receptors), could be an option, although data supporting this strategy are still not available.
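Since the answer above leans on BMI and regular weight monitoring, a small worked example of the underlying arithmetic may help. The 7% flag used below is a convention from the adult antipsychotic trial literature for 'clinically significant' weight gain, not a figure from this article, and in paediatric practice age- and sex-adjusted BMI percentiles would be used instead:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight in kilograms divided by height in metres squared."""
    return weight_kg / height_m ** 2

def significant_gain(baseline_kg: float, current_kg: float,
                     threshold_pct: float = 7.0) -> bool:
    """Flag weight gain above a percentage threshold (7% is an assumed convention)."""
    return 100.0 * (current_kg - baseline_kg) / baseline_kg >= threshold_pct

# Example: a hypothetical 1.65 m adolescent gaining 5 kg from a 60 kg baseline
print(round(bmi(65.0, 1.65), 1))       # 23.9
print(significant_gain(60.0, 65.0))    # True (an 8.3% gain)
```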
Is it best to replace methylphenidate with guanfacine or use it as an adjunct when ADHD is present but tics require treatment as well?

As both methylphenidate and guanfacine are licensed for the treatment of ADHD, both can be helpful and effective treatments for this condition; however, the co-occurrence of tics is unlikely to be treated adequately with methylphenidate. While treating ADHD with methylphenidate can have beneficial effects on associated symptoms that increase tics (frustration, impulsivity, emotional dysregulation, etc.), there does not appear to be a direct effect on tics. The clinical decision whether to switch to guanfacine or to augment methylphenidate with guanfacine is based on: (i) the relative severity of ADHD and tic symptoms, (ii) whether methylphenidate is effectively managing the ADHD symptoms, and (iii) whether methylphenidate appears to be worsening tics. If methylphenidate is already effective in treating ADHD symptoms, but tics worsen and/or require treatment, guanfacine may be added. In cases of mild-to-moderate ADHD symptoms with co-existing tics, and/or if methylphenidate is only partially effective or poorly tolerated, discontinuation of methylphenidate and switching to guanfacine may be considered. If tics improve with guanfacine alone but ADHD worsens, then methylphenidate may be restarted. In cases of mild-to-moderate ADHD with co-existing tics where both require treatment, there is an argument that guanfacine monotherapy should be considered as the first-line medication choice rather than methylphenidate. However, there remains clinical uncertainty about the best treatment choice in these circumstances, and in the UK the current HTA SATURN trial ( https://fundingawards.nihr.ac.uk/award/NIHR128472 ) is examining this question in a head-to-head comparison of guanfacine versus methylphenidate for ADHD. In cases of moderate-to-severe ADHD with tics, ADHD symptoms are likely to require treatment with psychostimulants, which are the first-line treatment choice for ADHD. Therefore, where tics are also present, augmenting methylphenidate with guanfacine to cover the symptoms of both conditions seems preferable. Using guanfacine in this way may also benefit ADHD symptoms, as shown in combination-treatment trials. Adverse events were generally mild to moderate, and combined treatment showed no differences in safety or tolerability. While appreciating that, when prescribing for children and young people, it is best for children to receive as few medications as possible at the lowest dose that achieves the desired clinical goals, when there is clear impairment from both disorders it may be beneficial to combine treatment. Therefore, especially when ADHD symptoms are more severe and impairing, a combination of methylphenidate and guanfacine is worth considering. The addition of guanfacine can also potentially benefit sleep difficulties and night-time settling, and decrease the need for alternatives such as melatonin.
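Criteria (i)–(iii) above amount to simple branching logic, summarised in the sketch below. The function and its inputs are our own schematic framing of the text, not a validated algorithm, and they deliberately ignore the many individual factors a prescriber would weigh:

```python
def tic_adhd_strategy(adhd_severity: str, mph_effective: bool,
                      mph_worsening_tics: bool) -> str:
    """Schematic switch-vs-augment reasoning for ADHD with tics.
    adhd_severity: 'mild-moderate' or 'moderate-severe'."""
    if mph_effective and not mph_worsening_tics:
        # ADHD is controlled; tics still need treatment in their own right
        return "continue methylphenidate and add guanfacine for tics"
    if adhd_severity == "mild-moderate":
        # Partial response, poor tolerance, or tic worsening on methylphenidate
        return ("switch to guanfacine monotherapy; "
                "restart methylphenidate if ADHD worsens")
    # Moderate-to-severe ADHD usually still needs a psychostimulant
    return "keep methylphenidate (first line for ADHD) and augment with guanfacine"

print(tic_adhd_strategy("moderate-severe", mph_effective=False,
                        mph_worsening_tics=True))
```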
When would I consider antipsychotics more appropriate than clonidine/guanfacine when treating tics in children and young people?

In a recent prescribing survey among 59 European expert clinicians who were members of the European Society for the Study of Tourette Syndrome, the most commonly used medications for tics in children and adolescents were, in descending order, aripiprazole, clonidine, tiapride (not available in the United Kingdom) and guanfacine. Newer (so-called) antipsychotics (e.g. risperidone and aripiprazole) and noradrenergic agents (e.g. clonidine and guanfacine) have increasingly been favoured over the older so-called antipsychotic drugs (e.g. pimozide, sulpiride and haloperidol, all D2 receptor antagonists). Although a recent systematic review and meta-analysis shows that efficacy for tic treatment appears similar between noradrenergic and antipsychotic medications (Hollis et al., 2016), the Maudsley prescribing guidelines recommend that, because of the more serious adverse effects of antipsychotics, noradrenergic drugs (i.e. clonidine and guanfacine) are used first line. The Maudsley guidelines do, however, go on to recognise that antipsychotic medications may be more beneficial in some individuals, although there is no clear guidance as to which group of children this might pertain to. One possible advantage of antipsychotic medications over other options is that medication such as risperidone or aripiprazole could have a more favourable effect on some of the behavioural symptoms associated with tic disorders and on the common comorbidities, such as ASD and OCD. Risperidone, particularly, has proven efficacy in ameliorating aggressive behaviour; therefore, in cases of challenging behaviour linked to autism spectrum disorder or ADHD, risperidone and aripiprazole are likely to have more significant effects on aggression than their noradrenergic counterparts. It is worth noting, though, that noradrenergic options such as clonidine or guanfacine are likely to have much more of an effect on ADHD core symptoms, and noradrenergic drugs are recommended as the first-line option for tic disorders with comorbid ADHD in the recent European guidelines. Risperidone also has a greater affinity for 5-HT receptors and is more likely to augment serotonergic agents in the treatment of obsessive-compulsive symptoms (OCSs). Therefore, when a combined tic and OCS presentation is present and obsessive and/or compulsive symptoms are equally problematic, antipsychotics have often been used alongside SSRIs. This effect has also been shown with aripiprazole, and the partial agonist profile of this medication may point to it being more effective for mood and OCS difficulties. One of the potential side effects of noradrenergic agents is depression. Typically, antipsychotic medications are also more sedating than noradrenergic alternatives; this might, for example, be relevant in young people who are already using sedating medications such as promethazine. Of note, noradrenergic agents are contraindicated in individuals with severe bradyarrhythmia secondary to second- or third-degree AV block or sick sinus syndrome. Antipsychotic medications are not contraindicated in these patients, although a cardiology opinion would be essential before prescribing antipsychotic medication in this group. Of the antipsychotic medications, aripiprazole has the most favourable cardiac side-effect profile, especially with regard to QT prolongation.
When clonidine is being used in higher doses, is it best to go beyond the treatment window or consider a switch to guanfacine? If switching, is it best to reduce and withdraw clonidine before initiating guanfacine, or can you cross-taper?

Recommended clonidine therapeutic doses are in the order of 3–5 mcg/kg/day. It is uncommon to use doses beyond 300 mcg daily, due to the sedating and hypotensive effects of clonidine. However, some young people tolerate the cardiac effects of clonidine well and find that the sedation has additional benefits for sleep and comorbidities. In these children, higher doses may be beneficial, with regular monitoring of blood pressure and pulse. Because clonidine can also be divided into two or three doses, children who are sensitive to guanfacine side effects quite often find that a BD or TDS regimen of clonidine suits them better than the once-daily dosing of guanfacine. Clonidine is 10 times more potent than guanfacine at presynaptic alpha-2 receptors, whereas guanfacine appears to be more potent at postsynaptic receptors, which translates into increased prefrontal activity, impulse-control regulation and improvement in behaviour regulation. Therefore, where comorbidities make behavioural modification desirable, guanfacine seems to have the better effect. Guanfacine weight-based dosing guidelines indicate 0.05–0.08 mg/kg/day, suggesting a rough equivalency of guanfacine 1 mg = clonidine 100 mcg. If it is clinically acceptable from a tic perspective to reduce and withdraw clonidine first, before initiating guanfacine, this is likely to be best practice, as the side effects will be additive. Reducing clonidine by 25 mcg every 3–5 days is thought to be appropriate to facilitate this reduction. However, it may not be beneficial from a tic/sleep/behaviour perspective to reduce and withdraw clonidine completely before starting guanfacine. In these circumstances, cross-tapering is recommended by reducing clonidine to 100 mcg daily before initiating guanfacine at 1 mg daily. Following on from this, all remaining clonidine doses are reduced by 25 mcg every 3 days before guanfacine is increased as necessary.
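The weight-based ranges and the rough 1 mg : 100 mcg equivalence quoted above lend themselves to a small worked example. The sketch below simply encodes the figures from the text (clonidine 3–5 mcg/kg/day; guanfacine 0.05–0.08 mg/kg/day; taper steps of 25 mcg every 3 days); it illustrates the arithmetic only and is in no sense a prescribing tool:

```python
def clonidine_range_mcg(weight_kg: float) -> tuple[float, float]:
    """Daily clonidine range at 3-5 mcg/kg/day (figures from the text)."""
    return (3 * weight_kg, 5 * weight_kg)

def guanfacine_range_mg(weight_kg: float) -> tuple[float, float]:
    """Daily guanfacine range at 0.05-0.08 mg/kg/day (figures from the text)."""
    return (0.05 * weight_kg, 0.08 * weight_kg)

def guanfacine_equivalent_mg(clonidine_mcg: float) -> float:
    """Rough equivalence quoted above: guanfacine 1 mg ~ clonidine 100 mcg."""
    return clonidine_mcg / 100

def taper_schedule(start_mcg: float, step_mcg: float = 25, days_per_step: int = 3):
    """(day, remaining clonidine dose) pairs for a 25 mcg / 3-day taper."""
    day, dose = 0, start_mcg
    while dose > 0:
        yield day, dose
        dose -= step_mcg
        day += days_per_step

# Example for a hypothetical 40 kg adolescent
print(clonidine_range_mcg(40))        # (120, 200) mcg/day
print(guanfacine_range_mg(40))        # (2.0, 3.2) mg/day
print(guanfacine_equivalent_mg(100))  # 1.0 mg
print(list(taper_schedule(100)))      # [(0, 100), (3, 75), (6, 50), (9, 25)]
```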
We hope that this article will be helpful for prescribers in their daily clinical practice. We also hope that, in the near future, additional high-quality evidence, applicable at the individual-patient rather than group level, will inform the answers to these and other important questions within the framework of a precision psychiatry approach .
Impact Absorption Power of Polyolefin Fused Filament Fabrication

Introduction

The evolution of sports mouthguards from rudimentary constructs in the 1800s to sophisticated, custom-engineered devices today underscores the enduring commitment to innovation and safety in sports . Thomas Carlos's contributions in the 1930s marked a pivotal moment, laying the groundwork for modern mouthguard technology, which now serves as a staple protective measure for athletes across diverse sporting disciplines . Amid this evolution, material advancements have played a pivotal role, particularly in enhancing impact absorption and force dispersion, vital for safeguarding both soft and hard tissues within the oral cavity . Ethylene vinyl acetate (EVA) stands out as a predominant material in contemporary sports mouthguard production due to its demonstrated efficacy . However, the pursuit of further advancement prompts exploration of alternative materials, with polyolefin (PO) fused filament fabrication (FFF) emerging as a notable contender in recent material science developments . The FFF process entails loading PO filament into a 3D printer, where it is conveyed through a heated extruder and transformed into a semi-liquid state . Layer by layer, according to the specifications delineated in a computer-aided design (CAD) file, the molten PO is extruded onto a build platform. Each layer rapidly cools and solidifies upon deposition, forming a cohesive structure. POs are prized for their flexibility, chemical resistance, and cost-effectiveness, yet printing with these materials presents unique challenges. Their elevated melting point relative to common FFF materials like PLA or ABS complicates the printing process, while robust interlayer adhesion is hindered by the low surface energy of POs . Consequently, meticulous calibration of printer parameters and careful surface preparation are essential for successful PO FFF printing. To ascertain optimal protection for athletes through sports mouthguards, a comprehensive examination of both conventional and innovative materials is imperative. By scrutinizing established options like EVA alongside novel choices like PO FFF, researchers and dental professionals can acquire a nuanced understanding of each material's attributes and limitations . This research equips stakeholders with the knowledge needed to mitigate the incidence of oro-facial injuries among athletes, thereby fostering safer sporting environments. The purpose of this study is to assess the energy absorption capabilities of both molded and 3D-printed materials, specifically focusing on their impact absorption reliability in the context of mouthguard fabrication. This research investigates whether additive manufacturing technology, using 3D-printed PO-based materials, can achieve impact energy absorption properties comparable to or better than traditional molded EVA materials commonly used in mouthguards. The study hypothesizes that 3D-printed PO mouthguards will demonstrate superior impact energy absorption, as evaluated by the Izod impact test, compared to EVA mouthguards. Conversely, the null hypothesis posits no significant difference in impact energy absorption between 3D-printed PO-based mouthguards and traditional molded EVA materials under Izod impact testing conditions.
Materials and Methods

Six materials were prepared and tested to quantify impact energy absorption following a modification of Test Method A of the American Society for Testing and Materials (ASTM) D256, "Standard Test Methods for Determining the Izod Pendulum Impact Resistance of Plastics," as a reference test standard. Testing was conducted at the Ohio State University's Center for Design and Manufacturing Excellence (CDME) in collaboration with The Ohio State University College of Dentistry. The materials evaluated in this in vitro study comprised five commercially available EVA and EVA-hybrid products and one PO 3D-printed polymer. The actual EVA and PO content of commercially available products is proprietary. EVA hybrid combines EVA with polyurethane; this combination of materials is what makes it a hybrid . The 3D-printed material was fabricated using the FFF technique with the Prusa Research Original i3 MK3S (Prusa Research, Prague, Czech Republic) printer. Modifying the testing protocol was necessary due to sample size constraints, so comparative impact strength values are reported. ASTM D256 specifies specimen outer dimensions of 2.5 in (63.5 mm) in length and 0.5 in (12.5 mm) in height; specimen width may be between 0.118 in (3.0 mm) and 0.5 in (12.5 mm). The samples in this experiment were scaled to 50% of the standard height, to a quarter inch (6.4 mm), with no changes in the other directions. The chosen 6.4 mm thickness for ASTM testing deviates from the recommended 4 mm thickness for mouthguards because further reduction to 4 mm risks compromising the reliability of calculations and tests. This adjustment was necessary to balance practical feasibility with scientific rigor. Testing was carried out using a benchtop Izod Impact Tester (Testing Machines Inc., New Castle, DE, USA), depicted in Figure . Five test samples were fabricated for each material under review (Table ) using the described scaled geometry. For sample fabrication, raw materials were prepared by pressure molding or 3D printing to create a flat sheet at the desired thickness. Pressure-formed samples were molded using the Drufomat Scan (Dreve America Corp, Eden Prairie, MN, USA) into flat panels approximately 6.4 mm thick and sliced into samples approximately 8 mm wide and 62.5 mm in length (Figure , Table ). The PO samples were designed using the CAD software Rhino 7 (Rhinoceros 3D; TLM Inc. dba Robert McNeel & Associates, Seattle, WA, USA) as 6.4 mm square bars, 125 mm in length. The PO samples are composed of proprietary base polymers compounded in pellet form; the material is then extruded and cooled into filament form. The samples for this study were manufactured by loading the filament on a Prusa i3 MK3S (Prusa Research, Prague, Czech Republic) printer. In the FFF process, the PO polymer filament melts and is extruded layer by layer to build the sample shape according to the design instructions. As it cools, it solidifies into a rigid structure. Once solidified, the PO has a Shore A hardness of 90–95A at 37°C (98.6°F), i.e., at body temperature. For reference, EVA has a Shore A hardness of approximately 80A . In fact, PO hardness is comparable to that of thermoplastic elastomers (TPEs), which are known for their adaptability and comfort.
This demonstrates that PO can be flexible enough to adapt to the teeth and supporting tissues, making it a suitable candidate for applications requiring both rigidity and adaptability, such as mouthguards. For this testing, samples were notched to a remaining thickness of approximately 80% of the original, or 5.1 mm, using the Instron W-3551 V-Notch Milling Cutter (Instron; Norwood, MA, USA) (Figure ). Some variation in the machining of the flexible materials was observed. The pressure-molded samples were not of consistent thickness, unlike the 3D-printed samples, so the notch depth varies from sample to sample. However, the remaining thickness is the critical value, as it is responsible for the impact absorption, and effort was made to keep the remaining thickness consistent across samples. Actual dimensions for width and thickness are provided in Tables and ; these dimensions enter the equation for the impact absorption calculation. Notch depth and sample widths were measured with a Mitutoyo 500-195-30 (Mitutoyo; Kawasaki, Kanagawa, Japan), an electronic caliper with 0.01 mm resolution. Because the PO samples were 3D printed using the FFF process, the notches could be placed parallel or perpendicular to the build direction. Notches in one set of five samples were cut vertically, parallel to the build direction (Figure ). The second set of five samples was notched horizontally, perpendicular to the build direction (Figure ). This yielded two PO groups: one assessing impact parallel to the build direction [PO vertical (POV)] and one assessing impact perpendicular to the build direction [PO horizontal (POH)]. A third option would have been possible if samples had been printed vertically: the notch could then have been made in the plane of the build layers (Figure ). However, samples in the current study were printed horizontally, so only notches vertical or horizontal to the build direction were assessed. For the testing procedure, the samples were loaded into the jaws of the Izod Impact Tester (Testing Machines Inc., New Castle, DE, USA) so that the notch was aligned with the clamping surface and faced the hammer (Figure ). W-3587-D Izod Tongs (Instron; Norwood, MA, USA) were used to ensure consistent alignment of the notch and perpendicularity of the sample. A 2.0 lb (8.9 N) hammer installed in the tester impacts the sample at approximately 3.4 m/s. The maximum energy available in this test is 2.0 ft·lb (2.7 J), and the measurement increment for the 2.0 lb hammer is 0.01 ft·lb (approximately 0.01 J). The hammer is released from its latch position and strikes the sample; the dial on the machine indicates the energy absorbed. Data acquisition is via the analog dial affixed to the machine (Figure ). Values were recorded and converted to joules (1 ft·lb ≈ 1.36 J); the conversion between units neither diminishes nor improves the clarity of the analog dial readings and does not affect the precision level for scientific reporting. Recorded measurements ranged from 0.10 to 0.40 ft·lb, corresponding to approximately 0.14–0.54 J. The data are expressed in kilojoules per square meter (kJ/m²), the unit used for ISO Izod impact toughness; 1 kJ/m² equals one-thousandth of a J/mm². The ASTM standard for Izod impact testing reports energy lost per unit of thickness at the notch (J/mm). The following formula was used to convert this to the ISO standard, which reports energy lost per unit of cross-sectional area under the notch (J/mm²).
$$\text{Impact Strength}\;\left(\mathrm{J/mm^{2}}\right) \;=\; \frac{\text{Impact Energy}\;(\mathrm{J})}{\text{Notch Thickness}\;(\mathrm{mm}) \times \text{Notch Width}\;(\mathrm{mm})}$$
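To make the conversion chain explicit, the following short Python sketch applies this formula to a single analog dial reading. The dial reading (0.16 ft·lb) and notch dimensions are hypothetical values chosen to fall within the ranges reported above; they are not measurements from this study.

```python
# Illustrative only: the dial reading and notch dimensions below are
# hypothetical example values, not measurements from this study.

FT_LB_TO_J = 1.3558  # 1 ft·lb is approximately 1.36 J

def izod_impact_strength_kj_m2(dial_ft_lb, notch_thickness_mm, notch_width_mm):
    """Convert an analog dial reading (ft·lb) to ISO impact strength (kJ/m²)."""
    impact_energy_j = dial_ft_lb * FT_LB_TO_J
    strength_j_mm2 = impact_energy_j / (notch_thickness_mm * notch_width_mm)
    return strength_j_mm2 * 1000.0  # 1 J/mm² = 1000 kJ/m²

# Example: 0.16 ft·lb absorbed by a sample with a 5.1 mm remaining notch
# thickness and an 8.0 mm width.
print(f"{izod_impact_strength_kj_m2(0.16, 5.1, 8.0):.1f} kJ/m^2")  # ≈ 5.3 kJ/m^2
```

A reading of this size lands near the EVA group mean reported below, which is a useful sanity check on the unit conversions.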
Results

The mean and standard deviation of impact toughness were calculated for each material. For EVA, the mean was 5.4 ± 0.3 kJ/m², and for the 3D-printed material the mean was 12.9 ± 0.7 kJ/m². These values were averaged from the specific materials (POV and POH for PO) and are summarized in Table . A post hoc power analysis using G*Power software (Universitat Dusseldorf, Dusseldorf, Germany) version 3.1.9.4 indicated a statistical power of 1.000 (100%), confirming the study's robustness in detecting significant differences between groups. The power for sample sizes of 3 and 4 ranged from approximately 0.65 to nearly 0.97, reaching a maximum of 1.0 with a sample size of 5, consistent with a recent study . A bar diagram was created to enable an unbiased examination of the impact absorption capabilities of the different materials (Figure ). A preliminary normality test was performed using Minitab version 21.4.1 (Minitab LLC, State College, PA, USA). The Anderson–Darling test and probability plots yielded the p-values shown in Table . p-values greater than 0.20 suggest that the data follow a normal distribution, while values between 0.05 and 0.20 indicate potential deviations. In this study, PS had the lowest p-value at 0.118, and the BD and PR materials had p-values of 0.127 and 0.144, respectively. These results suggest that the data generally adhere to a normal distribution, although further data collection could strengthen the normality assumption for more robust analyses. Although the data are approximately normally distributed, verifying the variance structure is crucial for selecting appropriate statistical methods. Bartlett's test for equal variance, depicted in Figure , indicated that the impact toughness of the materials did not have equal variance across samples. Consequently, a two-sample t-test allowing unequal variances (Welch's t-test) was employed for comparisons. A comparative analysis of the five molded materials was conducted; Table and Figure present the results. BD (5.1 ± 0.6 kJ/m²) and DF (5.8 ± 0.3 kJ/m²) did not differ significantly (p = 0.061), nor did EF (6.2 ± 0.3 kJ/m²) and DF (p = 0.096). EF and PR (6.4 ± 0.4 kJ/m²) were the most similar (p = 0.235). The PS Izod impact result was 3.4 ± 0.1 kJ/m². The lack of full rupture or tearing in the molded materials suggests that their impact toughness may be underestimated, as energy absorption could be affected by incomplete failure or the testing conditions . An unequal-variance, two-sample t-test compared 3D-printed PO with each molded material (BD, DF, EF, PR, and PS). The results indicated that PO had significantly higher Izod impact strength than all molded materials (p < 0.001 at 95% confidence), as summarized in Table and Figure . PO samples demonstrated superior impact toughness, with abnormal failure modes such as tearing and laminar separation observed (Figure ); no such failure modes were detected in the molded materials. The impact absorption of PO is consistent with previous research . A comparison of PO specimens notched vertically (parallel to the build direction) and horizontally (perpendicular to the build direction) revealed a significant difference in impact toughness. The mean toughness differed by 1.7 kJ/m², or over 10%, with a 95% confidence interval of 0.6–2.8 kJ/m². The t-test, conducted using Minitab v21.4.1, confirmed this significant difference (p = 0.009, Figure ).
The results indicate that horizontally notched samples exhibited significantly higher impact toughness than vertically notched samples, highlighting the influence of build direction on ISO Izod impact absorption.
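The following Python sketch illustrates this statistical workflow with SciPy. Because only group means and standard deviations are reported here, the two sample arrays are hypothetical placeholder values, not the study's raw data; the calls mirror the analysis steps described above (normality check, Bartlett's test, then an unequal-variance Welch t-test).

```python
# Minimal sketch of the statistical comparison described above, using SciPy.
# The arrays are hypothetical placeholders (n = 5 per group), not raw study data.
import numpy as np
from scipy import stats

po_horizontal = np.array([13.5, 14.0, 13.2, 13.8, 13.4])  # kJ/m², illustrative
po_vertical = np.array([11.9, 12.3, 11.7, 12.1, 12.4])    # kJ/m², illustrative

# Anderson–Darling normality check (returns a statistic and critical values).
ad_result = stats.anderson(po_horizontal, dist="norm")

# Bartlett's test for equal variances; a small p-value motivates the
# unequal-variance (Welch) form of the two-sample t-test.
bartlett_stat, bartlett_p = stats.bartlett(po_horizontal, po_vertical)

# Welch's two-sample t-test, i.e. equal_var=False.
t_stat, p_value = stats.ttest_ind(po_horizontal, po_vertical, equal_var=False)

print(f"A-D statistic = {ad_result.statistic:.3f}")
print(f"Bartlett p = {bartlett_p:.3f}")
print(f"Welch t = {t_stat:.2f}, p = {p_value:.4f}")
```

With only five samples per group, the Welch form sacrifices little power when the variances happen to be equal, which makes it a safe default once Bartlett's test casts doubt on variance homogeneity.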
Discussion

Custom mouthguards are essential for protecting the stomatognathic system from impact forces while ensuring proper retention and comfort for the wearer . Polymeric materials, particularly EVA, are widely used in mouthguard fabrication due to their favorable mechanical and biological properties . However, alternative materials such as polyvinyl chloride, latex rubber, acrylic resin, and polyurethane have also been utilized . With the advent of additive manufacturing and CAD/computer-aided manufacturing (CAD/CAM) dentistry, there has been growing interest in exploring new materials and fabrication methods to enhance the performance of mouthguards . This study specifically investigated the application of FFF 3D printing in producing PO sports mouthguards, employing the ASTM D256 testing methodology to compare their impact resistance with that of traditional molded materials. The results revealed that PO FFF-printed mouthguards exhibited significantly higher Izod impact strength values than their counterparts molded from BD, DF, EF, PR, and PS. These findings align with prior research demonstrating the superior resilience of multimaterial 3D-printed samples and enhanced energy dissipation in 3D-printed materials compared to EVA . A critical observation from this study was the occurrence of tearing or laminar separation within the layers of 3D-printed samples during impact testing, as shown in Figure . This raises important questions about the structural integrity of 3D-printed mouthguards under impact conditions. In FFF 3D printing, the strength of the object relies on the bonding between layers, which is achieved through thermal bonding (extruded semi-molten material fusing with the layer below) and mechanical interlocking (the extruded material slightly deforming the underlying surface). However, the bonding interface can be a weak point when subjected to disruptive forces, potentially leading to issues such as laminar separation. During ASTM D256 testing, the hammer's impact induces bending, generating tension at the notch, the thinnest part of the sample, which can lead to tearing or cracking. Conversely, compression on the opposite side of the sample may cause buckling, resulting in laminar separation in the final layer. This deformation highlights the high impact absorption capability of 3D-printed materials, which can produce distortion not typically observed in traditional materials. This finding underscores the need to balance impact strength with structural integrity in the design and fabrication of 3D-printed mouthguards. Compression-molded composite materials did not exhibit cracking or tearing, indicating a potential advantage in structural robustness conferred by their manufacturing process. However, this must be weighed against the significantly higher impact resistance demonstrated by the 3D-printed mouthguards. The trade-off between maintaining structural integrity and optimizing impact resistance is a critical consideration in selecting materials for mouthguards, particularly as performance under real-world conditions is the primary concern. Figure illustrates the substantial difference in impact absorption between 3D-printed and molded materials, highlighting consistent patterns in the data that reinforce the statistical findings. Notably, the study found a significant difference in impact strength between vertically notched (parallel to the build direction) and horizontally notched (perpendicular to the build direction) 3D-printed PO specimens.
This directional sensitivity, with a difference exceeding 10% and a statistically significant p-value of 0.009, suggests that manufacturing orientation plays a crucial role in the material's response to impact forces. This finding holds important implications for the design and manufacture of 3D-printed mouthguards, underscoring the need to control build orientation carefully during fabrication. Mouthguards produced with the build orientation perpendicular to the impact exhibited improved impact absorption, highlighting orientation as a critical factor in optimizing performance. In contrast, this issue does not apply to compression-molded materials, which do not involve bonded layers and are therefore not susceptible to laminar separation. The study also highlighted limitations of the ASTM D256 test for evaluating the impact resistance of mouthguard materials. The occurrence of laminar separation in 3D-printed samples suggests that this test may not fully replicate the real-world conditions experienced by mouthguards during sports activities. The explicit values reported for Izod impact toughness may not entirely reflect the materials' performance, as the samples did not fully separate during testing. Additionally, when the sample flexes during testing, friction between the hammer and the sample can lead to overreporting of the absorbed energy. This raises questions about the suitability of the test method for these materials, as the specimen shape used in the test may not align well with the properties of soft mouthguard materials. For instance, conducting impact tests at low temperatures, as is common in other industries, could promote clean fracture and improve test reliability, although the relevance to dental mouthguards remains uncertain. Alternatively, exploring the tear strength of mouthguard materials using ASTM D624, Standard Test Method for Tear Strength of Conventional Vulcanized Rubber and Thermoplastic Elastomers, could offer a more accurate assessment. This method, which uses specialized molds or cutting dies to form the specimen geometry, may provide a better understanding of the materials' tear resistance under conditions more representative of their use in sports. Another important consideration is the potential for 3D printing to revolutionize mouthguard manufacturing by enabling the production of a single protective splint that can be printed and duplicated while maintaining consistent dimensions . Given that intraoral appliances, including mouthguards, are generally recommended for frequent replacement due to hygiene concerns , the ability to reproduce identical mouthguards easily offers significant practical advantages. Furthermore, research has highlighted the ongoing need to establish effective cleaning standards for thermoplastic appliances, and 3D printing could facilitate a more hygienic and cost-effective production process . While FFF 3D printers are affordable and widely available, investigating alternative 3D printing technologies that can produce more homogeneous components presents intriguing possibilities. Stereolithography (SLA), for example, which uses light to cure a liquid resin, is already employed for 3D printing occlusal splints and nightguards . SLA's capability to produce high-resolution objects with intricate shapes and undercuts makes it particularly suitable for dental applications, including mouthguards.
Further research into impact absorption and the development of novel materials using such advanced techniques could lead to significant improvements in mouthguard performance. Despite the valuable insights provided, this study has several limitations. It focused solely on impact absorption properties using the Izod impact test, neglecting other important factors such as durability, comfort, and testing environment. The limited sample size and range of materials tested may restrict the generalizability of the findings to other mouthguard types and manufacturing techniques. Additionally, the study's in vitro design does not account for real-world conditions or dynamic sporting environments, which could affect material performance differently. Although ASTM D256 is a standard impact resistance test, its relevance for mouthguard materials may be limited, as shown by the laminar separation in 3D-printed samples. Lastly, the study did not evaluate the long-term performance or clinical outcomes of the mouthguards, which are essential for understanding their effectiveness in preventing orofacial injuries. Overall, these limitations should be considered when interpreting and applying the study's results. The study's results indicate that 3D-printed PO mouthguards demonstrate significantly higher impact toughness than their molded counterparts (BD, DF, EF, PR, and PS), leading to the rejection of the null hypothesis. However, the observed laminar separation during impact testing suggests a potential trade-off in structural integrity that must be carefully managed. While traditional molded materials offer advantages in robustness, the superior impact resistance of 3D-printed mouthguards highlights their potential for enhanced protection. Nevertheless, the findings also underscore the need for further research to refine testing methods and evaluate the long-term performance and clinical outcomes of these materials under real-world conditions.
Conclusion

This study demonstrates that 3D-printed PO-based mouthguards significantly outperform traditional molded EVA materials in impact energy absorption. These findings highlight the advantages of FFF in improving sports mouthguard protection. 3D-printed mouthguards fabricated with build orientations perpendicular to the direction of impact demonstrate significantly enhanced impact absorption. Future research should focus on refining material formulations, optimizing manufacturing processes, and exploring alternative testing methods to better replicate real-world impact scenarios for enhanced athlete safety and performance.
Leonardo Mohamad Nassani: conceptualisation and study design, data analysis and interpretation, revision of the manuscript, approval of the manuscript. Samuel Storts: protocol elaboration, data collection and interpretation. Irina Novopoltseva: protocol revision, critical revision of the manuscript, approval of the manuscript. Lauren Ann Place: manuscript drafting, data collection. Matthew Fogarty: literature review, data collection. Pete Schupska: data collection and analysis, statistical interpretation, manuscript drafting. All the authors made substantial contributions to the manuscript.
The authors declare no conflicts of interest.
Tables S1–S3.